Quasar

What node builders support 65535 segs and how?


I am curious whether anybody has comprehensive information on which node builders can be used to build maps that exceed the normal limit of 32767 on certain map entities such as segs, and what method they use to accomplish this.

By method, I mean which of the following do they do:

1. Keep track of internal indices as unsigned shorts or long ints, using -1 or 65535 as a special value to mean none. This is a simple unsigned extension of the existing node format.

2. Switch to zdoom node format when the map gets too large.

3. Switch to some other node format when the map gets too large.

I am not interested in node builders which do #2 or #3 because I have no plans to support non-DOOM-format nodes until/unless the community agrees on a standard that doesn't require external libraries to load (no zlib bullshit for me please).

I mainly just need to know if there are any builders which have taken approach #1 before I start trying to expand the limits in Eternity. If there are no such builders, then expanding the limits will be useless.

Quasar said:

I am not interested in node builders which do #2 or #3 because I have no plans to support non-DOOM-format nodes until/unless the community agrees on a standard that doesn't require external libraries to load (no zlib bullshit for me please).

Care to explain why? zlib is free and easy to use so I really don't see the problem. Right now ZDoom's compressed node format is the only one that offers high vertex precision and large map support for non-GL nodes.


Your choice #1 is not valid, as you cannot support more than 65534 segs in the standard wad structures.

When there are > 65534 segs, GLBSP will switch to V5 format for GL-nodes, and zdoom format for normal nodes.

Do everyone a favour and pick an existing method: either support zdoom format, or load GL nodes (just drop the minisegs to convert them to normal nodes).

Zlib is small, easy to link statically if you want, and there are no hassles with the license. This thing about "extra dependencies" smells like a lot of crap to me.

Quasar said:

If there are no such builders, then expanding the limits will be useless.

Nah, I think it's more like once you expand the limits, current nodebuilders will soon catch up.

Ajapted said:

Your choice #1 is not valid, as you cannot support more than 65534 segs in the standard wad structures.

Sheeeeeeeeesh, he was off by 1, let it slide. Up to 65534 then.

Lüt said:

Nah, I think it's more like once you expand the limits, current nodebuilders will soon catch up.

In response to that:

Ajapted said:
Do everyone a favour and pick an existing method: either support zdoom format, or load GL nodes (just drop the minisegs to convert them to normal nodes).

The one thing we do not need is yet another node format. The problem has already been solved.

Graf Zahl said:

The one thing we do not need is yet another node format. The problem has already been solved.

Yes, but in response to that:

Quasar said:

(no zlib bullshit for me please)

:P

Lüt said:

Sheeeeeeeeesh, he was off by 1, let it slide. Up to 65534 then.

Perhaps I misunderstood the question. Supporting 32768-65535 segs is trivial: just treat the seg numbers in SSECTORS as unsigned 16-bit. Supporting more requires a new format like zdoom or GL nodes.

I think it is sidedefs that are limited to 65534; again, just treat the field as unsigned, and use 0xFFFF for the special "NONE" value instead of -1.

P.S. In these cases GLBSP simply writes the 16-bit field as if it were unsigned, but displays a warning message.

Lüt said:

Yes, but in response to that:

Quasar said:
(no zlib bullshit for me please)

:P

Which IMO is a totally irrational attitude. What's so bad about adding a completely free library that is no problem whatsoever to work with? The worst it can do is make the executable slightly larger, and honestly, who cares about a few kilobytes today?

Writing a compressed nodes loader can't take longer than a few hours. Once it's done: no more problems ever!

Ajapted said:

Perhaps I misunderstood the question. Supporting 32768-65535 segs is trivial, just treat the seg numbers in SSECTORS as unsigned 16 bit.

OK, yes, that's exactly the predicament. We were looking to double the existing limit, not go beyond 65535. I was aware that doubling the limit was a far easier task than indefinitely increasing it, so that was the suggestion that started this. I thought you were being nitpicky because he said 65535 instead of 65534 :P

Graf Zahl said:
Which IMO is a totally irrational attitude. What's so bad about adding a completely free library that is no problem to work with whatsoever? The worst thing it can do is make the executable slightly larger but to be honest, who cares about a few kilobytes today?

I don't know; I sure don't. I don't know any of the technical details. All I know is that Espi and I have a map which is barely 2/3 complete with a current seg count of over 34,000 and can't be finished until this gets fixed, so we're going to keep whining until it does :P

Obviously Quasar has his reasoning, but this is one issue I can't contribute anything but personal requests toward, so I'll just sit back and watch the debate unfold.

Lüt said:

OK, yes, that's exactly the predicament. We were looking to double the existing limit, not go beyond 65535. I was aware that doubling the limit was a far easier task than indefinitely increasing it, so that was the suggestion that started this. I thought you were being nitpicky because he said 65535 instead of 65534 :P

If you are satisfied with 65535 segs it is surely enough to make the seg index unsigned and leave it at that.

But rest assured, once the first limit is broken some mapper will reach the next one sooner or later. I have seen maps with well over 90000 segs (120000 with GL nodes) that still don't exceed the sidedef limit.


Oh yeah, I know. From the very beginning of my entry into the project back in '99, my grand finale level had to be split into 4 separate maps because of the 32k limit, and now that portals are around, I'm going to have to split it up even further. That's where the eventual hub system comes in, because really, a map that size is best off with such transitional points at key locations in order to be remotely manageable.

But for now, getting this one map to work will be enough, and I don't foresee any future maps in the project (except the aforementioned last one) exceeding that limit.


Mainly, besides not wanting to increase Eternity's dependence on external libraries, I don't see the point of compressing individual wad lumps. WADs are compressed for distribution, and hard drives are so big now that I don't think anyone is worried about how much space a few DOOM maps' wad lumps are using.

If they were several megabytes or something, maybe I could see the point. As is, all it does is require a lot of extra code and some extra processing time for more or less no benefit.

Also, you guys are miscounting. Apply the fencepost theorem:

65534 - 0 + 1 = 65535.

Taking 65535 to stand for -1 removes only one index. But counting from 0 to 65535 gives 65536 values, not 65535; 0 is a valid index too.


Quasar said:
Mainly, besides not wanting to increase Eternity's dependence on external libraries,

If you are so worried about external dependencies you can always include the source and distribute it with yours. Then it just becomes some additional code and is no different than the other code you got from somewhere else.

I don't see the point of compressing individual wad lumps. WADs are compressed for distribution, and hard drives are so big now that I don't think anyone is worried about how much space a few DOOM maps' wad lumps are using.

The point is not that compression is probably superfluous (which I would agree with), but that the format already exists and at least two node builders can create it.

If they were several megabytes or something, maybe I could see the point. As is, all it does is require a lot of extra code and some extra processing time for more or less no benefit.

What extra processing time? zlib is fast enough for real-time decompression, and since large node data can indeed be several megabytes and normally compresses rather well, the time spent decompressing is easily made up by the shorter read time from the HD.

Quasar said:

65534 - 0 + 1 = 65535.

Thanks for the tip :)

What was your question again?


I just need to know what, if any, node builders support simple unsigned extension of the existing node format :P

If in the future I really want to support stuff such as INT_MAX segs or whatever, I'll consider the existing alternatives. But as Lüt said, doubling the limit within the confines of the existing node format is a good start, and it is also in sync with my port's goals, being a simple and natural extension of the existing format.


GLBSP has supported > 32767 segs for a long time. Recent versions have been tested, and judging by the oldest version I still have (1.91, Sep-2000), it should work there as well.

GLBSP has supported > 32767 sidedefs since version 2.10 (Sep-2004). It uses the following code when loading linedefs:

sidedef_t *SafeLookupSidedef(uint16_g num)
{
  if (num == 0xFFFF)
    return NULL;

  if ((int)num >= num_sidedefs && (sint16_g)(num) < 0)
    return NULL;

  return sidedefs[num];
}


Maybe I'm confused, but doesn't

(sint16_g)(num) < 0

end up causing the function to return NULL for anything numbered higher than 32767?


What do you want? If you want to play Deus Vult MAP05 in Eternity, take the following steps:

1) #define NO_INDEX ((unsigned short)-1)

2) Global replacement:
sidenum[index] <comparison operation> -1 =>
sidenum[index] <comparison operation> NO_INDEX

3) Change some typedefs:

typedef struct {
  unsigned short v1;          //e6y
  unsigned short v2;          //e6y
  unsigned short flags;
  short special;
  short tag;
  unsigned short sidenum[2];  //e6y
} PACKEDATTR maplinedef_t;

typedef struct {
  unsigned short v1;          //e6y
  unsigned short v2;          //e6y
  short angle;
  unsigned short linedef;     //e6y
  short side;
  short offset;
} PACKEDATTR mapseg_t;

Quasar said:

Maybe I'm confused but doesn't "(sint16_g)(num) < 0" end up causing the function to return NULL for anything numbered higher than 32767?

Note the first part "(int)num >= num_sidedefs". In other words, if the unsigned sidedef number is not valid (exceeds total number of sidedefs), allow any signed negative number to mean NULL.

The second part is actually redundant. It is enough to just check against the total number of sidedefs (calculated from size of lump).

Ajapted said:

Note the first part "(int)num >= num_sidedefs". In other words, if the unsigned sidedef number is not valid (exceeds total number of sidedefs), allow any signed negative number to mean NULL.

The second part is actually redundant. It is enough to just check against the total number of sidedefs (calculated from size of lump).


Oh, I see ;) I wasn't paying close attention to the predicate. But that's OK. Thanks for the info you gave me :)

