Doomworld Forums : Powered by vBulletin version 2.2.5 Doomworld Forums > Classic Doom > Doom Editing > Vanilla mappers: How much of a problem is the BLOCKMAP limit?
fraggle
Filled with the code of Doom


Posts: 7725
Registered: 07-00


Yeah, I noticed that the header seems to be redundant.

Old Post 12-21-13 08:35 #
Maes
I like big butts!


Posts: 12648
Registered: 07-06


Speaking of header redundancy: while investigating Mocha Doom's demo compatibility, I discovered that an apparently "harmless" optimization I had made (skipping the first element in a blockchain, on the grounds that it was always (?) 0x0000 and thus redundant) resulted in mysterious demo desyncs in one of the Plutonia IWAD's demos, which had formerly played back correctly even in Mocha Doom (!).

So what gives? Is the "zero linedef" header really necessary in checks, or do some blockmaps actually carry an information payload there? Only an analysis will tell... Again, I wonder if this is a common optimization/option in some nodebuilders.

In any case, all those tricks really result in scrounging the bottom of the barrel for a handful of bytes...

Old Post 12-21-13 14:11 #
fraggle
Filled with the code of Doom


Posts: 7725
Registered: 07-00



Maes said:
So what gives? Is the "zero linedef" header really necessary in checks, or do some blockmaps actually carry an information payload there?
It's possible that some blockmap builders already assume that a leading "0" is redundant, so just start using the first entry to include real lines. The redundant header is just like having linedef #0 in every block.
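One way to settle the "do any blocklists carry a payload in the header slot" question would be a quick scan over real WADs. A minimal sketch (hypothetical helper; assumes the raw BLOCKMAP lump bytes are already read out of the WAD, and ignores the question of unsigned vs. signed offsets for simplicity):

```python
import struct

def blocklists_start_with_zero(blockmap: bytes) -> bool:
    """Check whether every blocklist in a raw BLOCKMAP lump begins with
    the customary 0x0000 header entry."""
    # Lump header: grid origin (x, y) and grid size in columns/rows.
    x0, y0, cols, rows = struct.unpack_from("<4h", blockmap, 0)
    nblocks = cols * rows
    # One 16-bit word offset per block, starting right after the header.
    offsets = struct.unpack_from("<%dH" % nblocks, blockmap, 8)
    for off in offsets:
        first, = struct.unpack_from("<H", blockmap, off * 2)
        if first != 0x0000:
            return False
    return True
```

Run over a set of PWADs, a single `False` would prove that some nodebuilder out there really does put a payload in the first entry.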

Back to business: I started experimenting with my ideas. The output below is from my test script, which tries every block-grid alignment in (16, 16)-unit steps and counts the number of intersections between lines and the block grid at each alignment.

You can see that a +(64, 64) offset is a very good choice (4788 intersections) while a +(80, 112) offset would be a very bad one (5738 intersections).

The level I'm testing with has 8129 linedefs, so you can assume that each line has to appear in at least one block, and each time there's an intersection it appears in another block. As a (very rough) estimate, that's (8129 + 4788) * 2 = 25834 bytes for the best case and (8129 + 5738) * 2 = 27734 bytes for the worst. That saves a couple of KB (about 7%), which isn't much - I was hoping for more. The effectiveness of this technique probably depends very much on the level geometry.

code:
8129 linedefs

            y=0    y=16   y=32   y=48   y=64   y=80   y=96   y=112
  x=0:     4881   5269   5042   5284   4861   5270   5047   5297
  x=16:    5241   5629   5402   5644   5221   5630   5407   5657
  x=32:    4967   5355   5128   5370   4947   5356   5133   5383
  x=48:    5215   5603   5376   5618   5195   5604   5381   5631
  x=64:    4808   5196   4969   5211   4788   5197   4974   5224
  x=80:    5322   5710   5483   5725   5302   5711   5488   5738
  x=96:    5060   5448   5221   5463   5040   5449   5226   5476
  x=112:   5320   5708   5481   5723   5300   5709   5486   5736
(intersection counts per (x, y) offset)
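The brute-force search described in the post can be sketched roughly like this (hypothetical names; it counts block-boundary crossings per offset, ignoring exact diagonal corner cases, so it is only the same kind of rough estimate):

```python
BLOCK = 128  # blockmap cell size in map units

def grid_crossings(lines, dx, dy):
    """Count how many times the linedefs cross block-grid boundaries when
    the grid origin is shifted by (dx, dy).  Every crossing puts a line
    into one extra block, so fewer crossings means a smaller blockmap."""
    total = 0
    for (x1, y1), (x2, y2) in lines:
        # Vertical grid lines crossed, then horizontal ones.
        total += abs((x2 - dx) // BLOCK - (x1 - dx) // BLOCK)
        total += abs((y2 - dy) // BLOCK - (y1 - dy) // BLOCK)
    return total

def best_offset(lines, step=16):
    """Try every (dx, dy) offset in multiples of `step` and return the
    one with the fewest crossings."""
    candidates = [(dx, dy) for dx in range(0, BLOCK, step)
                           for dy in range(0, BLOCK, step)]
    return min(candidates, key=lambda o: grid_crossings(lines, *o))
```

With `lines` taken from a level's LINEDEFS/VERTEXES lumps, `best_offset` would pick the equivalent of the +(64, 64) column in the table above.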

Old Post 12-22-13 19:23 #
Maes
I like big butts!


Posts: 12648
Registered: 07-06


An even sadder way to scrounge for bytes: since vanilla actually uses signed pointers to blockchains, if you could somehow guarantee that, when a level loads, the lump before BLOCKMAP (which is REJECT, another sad story...) will be contiguous with it, or at least at a predictable offset from it, you could store some blockchains in there and point at them with those "negative pointers"... Hmm... maybe that's what's really going on with some "corrupt" REJECT lumps in some PWADs? O_o

Old Post 12-22-13 22:05 #
wesleyjohnson
Senior Member


Posts: 1030
Registered: 04-09


The spec may say signed, but I assume unsigned, and many of the generators are not paying attention either.
If you assume unsigned, the pointer space is 128 KB; you only need to recognize 0xFFFF (instead of testing for -1). This is already compatible with generators like the one used for Europe.wad.

Given that Europe.wad rolled the pointer over three times, I don't think it stayed within even the 128 KB pointer limit. I don't know offhand about any other limits.

Having every empty block share the one empty blocklist is the biggest compression gain:
they can all point to the same lone 0xFFFF.
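A sketch of that sharing trick in a hypothetical blockmap writer (assumed names; offsets are written unsigned, so the lump may legally grow to 128 KB, i.e. 0x10000 16-bit words):

```python
import struct

def pack_blockmap(header, blocklists):
    """Pack blocklists into a BLOCKMAP lump, with every empty block
    sharing a single [0x0000, 0xFFFF] blocklist, and offsets written as
    unsigned 16-bit word indices."""
    nblocks = len(blocklists)
    words = list(header)          # x0, y0, columns, rows
    words += [0] * nblocks        # offset table, patched in below
    offsets = []
    empty_at = None
    for bl in blocklists:
        if not bl:                # empty block: reuse the shared list
            if empty_at is None:
                empty_at = len(words)
                words += [0x0000, 0xFFFF]
            offsets.append(empty_at)
        else:                     # zero header, linedef indices, terminator
            offsets.append(len(words))
            words += [0x0000] + list(bl) + [0xFFFF]
    words[4:4 + nblocks] = offsets
    if len(words) > 0x10000:
        raise ValueError("blocklist data exceeds the 128 KB pointer space")
    return struct.pack("<4h", *words[:4]) + \
           struct.pack("<%dH" % (len(words) - 4), *words[4:])
```

On a large map, where most blocks are empty, this one change is indeed where most of the savings come from.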

Any change that makes the blockmap less accurate is going to cause problems. You cannot make assumptions about other ports; some of us have new and innovative ways of using the existing data, and I would not know about conflicts in the DoomLegacy code until it was tried.
I would oppose any format that cannot be expanded back into a fully accurate blockmap.

Are you really trying to invent more compression schemes that can be handled by vanilla code, or are you trying to invent a way to encode more pointer space?

One idea would be to have the writer align all blocklists on 16-entry boundaries in the blockmap, then drop the redundant lower 4 bits of the pointers.
Each is really an index rather than a pointer.
The reader can then regenerate the unaligned offsets as it reads, re-expanding them into 32-bit pointers. I have not examined the blockmap format closely enough to see how much effect this would have.
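A sketch of that alignment idea (hypothetical helpers; "words" here are the 16-bit blockmap entries, so dropping 4 bits multiplies the addressable range by 16, at the cost of up to 15 words of padding per blocklist):

```python
ALIGN = 16  # blocklist starts rounded up to 16-word boundaries

def pad_to_alignment(words):
    """Writer: pad the lump (a list of 16-bit words) so the next
    blocklist starts on an ALIGN-word boundary."""
    words += [0] * (-len(words) % ALIGN)
    return words

def encode_offset(word_offset):
    """Writer: drop the now-redundant lower 4 bits, turning the 16-bit
    offset field into an index with 16x the reach (about 2 MB)."""
    assert word_offset % ALIGN == 0, "writer must pad first"
    return word_offset // ALIGN

def decode_offset(stored):
    """Reader: re-expand the stored index into a real word offset."""
    return stored * ALIGN
```

Note this buys pointer space, not compression, and a vanilla reader would misinterpret the shifted offsets, so it only helps ports that opt in.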

Last edited by wesleyjohnson on 12-22-13 at 23:16

Old Post 12-22-13 23:11 #
Maes
I like big butts!


Posts: 12648
Registered: 07-06



wesleyjohnson said:
Are you really trying to invent more compression schemes that can be handled by vanilla code, or are you trying to invent a way to encode more pointer space?


The former. Extending the format with 32-bit blockmap pointers, and even 32-bit linedef indices with no limit on lump size, would be trivial if vanilla compatibility weren't a requirement. The extension to 128 KB by using unsigned pointers is more of a de facto standard for Boom maps, which just happens to be backwards-compatible with vanilla blockmaps (though vanilla Doom is not forwards-compatible with it).

Old Post 12-22-13 23:49 #
fraggle
Filled with the code of Doom


Posts: 7725
Registered: 07-00



Maes said:
An even sadder way to scrounge for bytes: Since vanilla actually uses signed pointers to blockchains, if you can somehow guarantee that when loading a level, the lump before BLOCKMAP (which is REJECT, another sad story...) will be contiguous or at least within a predictable offset from BLOCKMAP, you could store some blockchains in there, which would then be pointed to by those "negative pointers"...Hmm...maybe that's what's really going on with some "corrupt" REJECT lumps on some PWADs? O_o
Won't work. Remember that lumps are cached into memory by the zone memory system: location on disk isn't location in memory. You could maybe do something with the fact that the REJECT lump is loaded before the BLOCKMAP lump, and make assumptions about where they're going to be put in memory because of that, but it's never going to be deterministic and... I really just don't want to go there.

Old Post 12-23-13 12:40 #
Maes
I like big butts!


Posts: 12648
Registered: 07-06



fraggle said:
make assumptions about where they're going to be put in memory


But... but...

http://codinghorror.typepad.com/.a/6a0120a85dcdae970b0128776ff992970c-pi

:-(

Old Post 12-23-13 13:45 #
Jonathan
I am not a leet hax0r :(


Posts: 753
Registered: 05-00


Just an idea, since my knowledge of Doom engine code is virtually nil, but could you save some space, and speed up blockmap processing, by adding dummy linedefs, post node-building, that simplify lines with multiple sections?

E.g. in the image below, the lines on the left could be represented by those on the right for the purposes of collision detection. Since they wouldn't belong to any segs, they wouldn't be rendered, but they could be included in blocklists.

http://i.imgur.com/jHlVj3n.png
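The merging step for such a dummy "collision-only" linedef could be sketched like this (hypothetical helper; it only detects exactly collinear runs, via an integer cross product, which suits Doom's integer vertex coordinates):

```python
def merge_collinear(pts):
    """Drop the interior vertices of a polyline wherever consecutive
    segments are exactly collinear, leaving one longer segment per
    straight run -- the candidate for a single dummy linedef."""
    out = [pts[0]]
    for prev, cur, nxt in zip(pts, pts[1:], pts[2:]):
        # Cross product of (cur - prev) and (nxt - cur); zero means the
        # three points are collinear, so `cur` can be dropped.
        cross = (cur[0] - prev[0]) * (nxt[1] - cur[1]) \
              - (cur[1] - prev[1]) * (nxt[0] - cur[0])
        if cross != 0:
            out.append(cur)
    out.append(pts[-1])
    return out
```

The builder would then emit one dummy line per surviving segment for the blocklists, while leaving the original segmented lines in place for rendering.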

Old Post 12-23-13 15:03 #
Maes
I like big butts!


Posts: 12648
Registered: 07-06


That's a fairly restrictive scenario, but I don't see why it shouldn't work. The only catch is that it would make the map virtually uneditable if you opened it up in an editor.

Old Post 12-23-13 16:53 #
fraggle
Filled with the code of Doom


Posts: 7725
Registered: 07-00



Jonathan said:
Just an idea, since my knowledge of Doom engine code is virtually nil, but could you save some space, and speed up blockmap processing, by adding dummy linedefs, post node-building, that simplify lines with multiple sections?
Took me a couple of minutes to understand exactly what you were proposing here, but yes, that possibly could work. But I think it would be too difficult in practice to actually write a blockmap builder that did this.

You'd have to impose careful restrictions on the linedefs you're merging - e.g. you can't mix impassable and passable lines, and you can't use it for special lines. I wonder if there are other gotchas that might make it completely unworkable: for example, I believe the engine requires all lines to have a front sidedef, which means the lines have to belong to a sector. I wonder if having overlapping sectors like that might be problematic as well (I think probably not, but I'm not sure).

As with all these ideas it's a matter of cost/benefit. I'm skeptical of how much space a trick like this would actually save.


Maes said:
But... but...
Even if it could be made to work in DOS with Vanilla Doom, I don't think you can rely on it *always* working. For example, REJECT and BLOCKMAP are probably usually located consecutively in memory. But if your heap is fragmented (after playing through several levels, or after restarting the game or loading/saving several times) then you can't be sure that assumption holds, even in Vanilla. It's a clever trick but it's a recipe for nasty, difficult to reproduce bugs.



wesleyjohnson said:
Any changes that make the blockmap less accurate is going to cause problems. You cannot assume about other ports, some of us have some new and innovative ways to use existing data. I would not know of conflicts within DoomLegacy code until it was tried.
I would oppose any format that cannot be expanded back into a fully accurate blockmap.

Are you really trying to invent more compression schemes that can be handled by vanilla code, or are you trying to invent a way to encode more pointer space?

The former, not the latter. The objective/challenge is to write a blockmap builder that can fit large levels below the Vanilla 64k limit. I'm not interested in defining new standards or formats for blockmap lumps. This investigation has nothing to do with Chocolate Doom.

I'm pretty sure that none of what I'm proposing ought to break any existing ports, unless you can provide a counterexample. I already provided something of an answer to this in my reply to Maes here ("I'm pretty sure this isn't a problem..." onwards).

Old Post 12-24-13 00:27 #