
About the savegame buffer


I was wondering: why go to all the trouble of calculating the buffer size, allocating the buffer, writing the savegame data into it, and then writing its contents into a file byte by byte? Why not just write the savegame data directly into the file? Most ports don't do anything (like compression) with the buffer after it has been filled with savegame data. Was there an OS consideration behind id doing it this way that (almost) everyone just follows, or am I missing something?
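
For reference, the pattern I'm describing, condensed from vanilla's G_DoSaveGame (details elided; SAVEGAMESIZE is the vanilla constant, the rest is abbreviated):

/* serialize into a memory buffer through a roving pointer,
   then flush the whole thing to disk in one go */
static byte savebuffer[SAVEGAMESIZE];
byte *save_p;

void G_DoSaveGame(void)
{
    save_p = savebuffer;

    P_ArchivePlayers();     /* each archiver bumps save_p as it writes */
    P_ArchiveWorld();
    P_ArchiveThinkers();
    P_ArchiveSpecials();

    *save_p++ = 0x1d;       /* consistency marker checked on load */

    M_WriteFile(savename, savebuffer, save_p - savebuffer);
}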

Thanks.


It was, just like so many other things in Doom, the quickest way to do it. Just declare a static buffer, hope that it's large enough and in the end write it out. How could they know that user maps would exceed the limits they deemed sufficient for their own maps?

And don't forget that Doom had to work on systems with 4MB RAM. There was not much room for elaborate schemes that increase memory usage. Everything had to be tweaked to run on an average system back then.

As for the ports, don't ask me. From a modern standpoint it's a pretty shitty way to do things but apparently nobody seems to think that a change is needed because, after all, it works, doesn't it?


That's not exactly what I had in mind. Static buffer or dynamic, why use a memory buffer at all, when I can dump the data directly onto the disk? Perhaps mixing different data types is easier when casting memory pointers on the fly? Am I missing some generic programming considerations?
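
To show what I mean by casting on the fly, in the style of p_saveg.c (a sketch, not actual source):

*save_p++ = (byte)gameskill;                        /* a single byte */
*(int *)save_p = leveltime;                         /* four bytes via a cast */
save_p += sizeof(int);
memcpy(save_p, &players[consoleplayer], sizeof(player_t));
save_p += sizeof(player_t);                         /* a whole struct at once */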


For example, it will work much more slowly on old systems like DOS. Do you remember autoexec.bat and config.sys with their magical parameters like FILES, BUFFERS, etc.?

You suggest to use
fwrite(&gameskill, sizeof(gameskill), 1, save_f);
fwrite(&gameepisode, sizeof(gameepisode), 1, save_f);
fwrite(&gamemap, sizeof(gamemap), 1, save_f);

instead of
*save_p++ = gameskill;
*save_p++ = gameepisode;
*save_p++ = gamemap;

Such things will work super slowly on old systems. Also, do not forget DOOM is cross-platform, and we never know in advance about the quality of write caching on unknown old systems.
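
For completeness: stdio buffers too, so with setvbuf the field-by-field version mostly degenerates into memcpy calls into user space (a sketch; the buffer size is illustrative):

#include <stdio.h>

void G_SaveFieldwise(const char *savename)
{
    static char iobuf[64 * 1024];

    FILE *save_f = fopen(savename, "wb");
    setvbuf(save_f, iobuf, _IOFBF, sizeof(iobuf));

    fwrite(&gameskill, sizeof(gameskill), 1, save_f);
    fwrite(&gameepisode, sizeof(gameepisode), 1, save_f);
    fwrite(&gamemap, sizeof(gamemap), 1, save_f);
    /* ... */

    fclose(save_f);   /* the buffered data hits the disk here */
}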


Thanks, you were most helpful.

I'm not against buffering, I'm just having a nightmare with dynamic memory allocation.


DOOM's save game code is pretty basic by modern standards. If you are working on a port I would advise you consider replacing it completely at the earliest opportunity.


I have replaced the original code with code based on Boom, calculating the buffer size in advance; my headache is with the allocation itself. On very big maps, about once a month, Z_Malloc is unable to allocate a big enough chunk of memory (like 300 KB) despite there being plenty of zone memory available. I'm frustrated; I think I'll just get rid of this buffer until I can resolve the allocation problems.

rpeter said:

On very big maps, about once a month, Z_Malloc is unable to allocate a big enough chunk of memory (like 300 KB) despite there being plenty of zone memory available. I'm frustrated; I think I'll just get rid of this buffer until I can resolve the allocation problems.



Don't use the zone heap for such allocations. A normal malloc will do nicely here. The zone heap is probably the biggest pile of shit that's still plaguing the Doom source.

It was a good idea when there wasn't enough memory to handle the game without caching, but even on a 32MB machine you wouldn't need it anymore if you'd take the time to add explicit deallocation code for level data instead of just clearing the heap without bothering with its actual contents (as Doom does).
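
A sketch of what I mean for the savegame buffer; G_SaveGameSize stands in for a Boom-style size pass and is not a real function:

#include <stdlib.h>

void G_DoSaveGame(void)
{
    size_t length = G_SaveGameSize();   /* hypothetical: walk the level
                                           and sum up what will be written */
    byte *savebuffer = malloc(length);

    if (savebuffer == NULL)
        I_Error("G_DoSaveGame: could not allocate %lu bytes",
                (unsigned long)length);

    save_p = savebuffer;
    /* ... archive players, world, thinkers, specials ... */

    M_WriteFile(savename, savebuffer, save_p - savebuffer);
    free(savebuffer);                   /* the zone heap never gets involved */
}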


I already use a lot of standard new/delete allocations. One day I'll probably get rid of the zone, but for now I'll just rewrite the saving code this weekend. It'll be nice practice with iostreams.

rpeter said:

It'll be a nice practice with iostreams.



Have fun with those abominations... :D
I wouldn't *ever* use them for anything.


Once I had the idea of making the savegame a text file, readable in a text editor, as Quake1 does (I'm not sure about Quake2). This could have the advantage of allowing an easy "cheat", e.g. changing health=1 to health=200 :), easier compatibility between versions, etc. I don't know if any modern Doom port does this, but personally I abandoned the idea because (as Graf Zahl said) it works, doesn't it?
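
The idea, roughly (a sketch; the function and the exact keys are illustrative, not from any port):

#include <stdio.h>

void G_SaveGameText(FILE *f, const player_t *p)
{
    fprintf(f, "skill=%d\n", gameskill);
    fprintf(f, "episode=%d\n", gameepisode);
    fprintf(f, "map=%d\n", gamemap);
    fprintf(f, "health=%d\n", p->health);       /* the "cheat": edit to 200 */
    fprintf(f, "armor=%d\n", p->armorpoints);
}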


So does fwrite - with a fraction of the object code. Iostreams are endless bloat - and I can't even see why this has to result in adding 100k to the executable. Either the code is incredibly badly written or incredibly badly organized.


Graf Zahl said:

So does fwrite - with a fraction of the object code.

I never used them, but I think they do good caching.

Graf Zahl said:

Iostreams are endless bloat - and I can't even see why this has to result in adding 100k to the executable.

heh. I see you never tried some powerful libraries like Developer Express * :) They add about 1 MB to the exe every time you use a new feature from them, like grids, bars, spreadsheets, printing, etc. Who cares.

entryway said:

heh. I see you never tried some powerful libraries like Developer Express * :) They add about 1 MB to the exe every time you use a new feature from them, like grids, bars, spreadsheets, printing, etc. Who cares.



People who need to send these programs over the internet to someone with low-bandwidth access or other limitations. This pretty much ruled out wxWidgets as our basic framework at the company I work for, because we can't afford to constantly re-send multi-megabyte EXEs by email.

Not everyone can afford to bloat their programs without any regard to compiled size.

Graf Zahl said:

Don't use the zone heap for such allocations. A normal malloc will do nicely here. The zone heap is probably the biggest pile of shit that's still plaguing the Doom source.

It was a good idea when there wasn't enough memory to handle the game without caching, but even on a 32MB machine you wouldn't need it anymore if you'd take the time to add explicit deallocation code for level data instead of just clearing the heap without bothering with its actual contents (as Doom does).

I take issue with the zone heap being called a pile of shit. It's actually a very efficient form of heap management, and when properly implemented, its performance is difficult to match. Of course, vanilla Doom's implementation had some bugs and shortcomings, as did Boom's attempts to fix them. All such known issues have been resolved in later ports, however.

It would be interesting to see some profiling results that pit a modern implementation of the zone heap against system malloc on various systems. At least on Windows 98, a call to Z_FreeTags(PU_LEVEL) using the zone heap vastly outperformed the same operation implemented on the C heap; exiting a level such as nuts.wad makes the difference painfully obvious. When you free memory to the system, the system has a lot of behind-the-scenes management to take care of. When you're just marking some memory you still own as free, the system couldn't care less.

IMO the benefits of the zone system (built-in debugging tools in particular, which are superior to ones contained in an external library) make it more than worthy of retention. The only real issue I can see with it is that it only functions on a fixed-size heap. The ability to efficiently expand the total allocation when exhausted, rather than falling back on system malloc, would be an interesting extension, but isn't possible AFAIK.
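
For reference, the reason Z_FreeTags is so cheap is that the whole thing is one walk over the zone's block list (condensed from the vanilla source):

void Z_FreeTags(int lowtag, int hightag)
{
    memblock_t *block;
    memblock_t *next;

    for (block = mainzone->blocklist.next;
         block != &mainzone->blocklist;
         block = next)
    {
        next = block->next;             /* get link before freeing */

        if (!block->user)
            continue;                   /* block is already free */

        if (block->tag >= lowtag && block->tag <= hightag)
            Z_Free((byte *)block + sizeof(memblock_t));
    }
}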

Graf Zahl said:

Don't use the zone heap for such allocations. A normal malloc will do nicely here. The zone heap is probably the biggest pile of shit that's still plaguing the Doom source.

It was a good idea when there wasn't enough memory to handle the game without caching, but even on a 32MB machine you wouldn't need it anymore if you'd take the time to add explicit deallocation code for level data instead of just clearing the heap without bothering with its actual contents (as Doom does).

I agree. It was useful back when Doom was written, because it was running on 486es and they needed FAST memory allocation to get decent performance. Nowadays there is no real reason to keep it. I certainly don't buy the argument that we should complicate the memory management subsystem to optimise for people playing nuts.wad.

One of the main purposes of the original allocator was caching. It lets you run Doom on a low-memory system (4 megs, back in those days) because WAD lumps get loaded from disk on demand. Nowadays, modern OSes will do efficient caching for you if you use facilities like mmap() - back in 1993, they didn't have a proper OS :-) Just as we don't need to go poking VGA registers to draw the screen any more, we don't really need to write our own disk caching system either.
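
A sketch of what that can look like (POSIX calls; error handling and the WAD directory are left out, and W_MapWad/wad_base are names I'm making up here):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>

static const unsigned char *wad_base;

void W_MapWad(const char *path)
{
    int fd = open(path, O_RDONLY);
    struct stat st;

    fstat(fd, &st);

    /* the kernel's page cache now does what the zone heap's lump
       caching used to do; a lump is just wad_base + its file offset */
    wad_base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
}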

Chocolate Doom actually has an alternate implementation of the z_zone interface that just wraps the standard OS malloc/free functions. I use it for chocolate-server, as it's not necessary to allocate a 16 meg heap for a program that just retransmits packets.
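
The gist of that wrapper is just this (a condensed sketch, not the literal Chocolate Doom source; the per-allocation bookkeeping that keeps Z_FreeTags and purging working is elided):

#include <stdlib.h>

void *Z_Malloc(int size, int tag, void **user)
{
    void *result = malloc(size);

    if (result == NULL)
        I_Error("Z_Malloc: failed on allocation of %i bytes", size);
    if (user != NULL)
        *user = result;

    return result;
}

void Z_Free(void *ptr)
{
    free(ptr);
}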

It's also possible to use it to replace the main Doom allocator. When doing this, the game will run, but there are occasional crashes. In fact this was a long-standing "bug" for a while, as the SVN builds on the website were being built using the wrong allocator! I can only assume that there are a few bugs hiding in the code somewhere that sometimes overflow the internal buffers. Because of this, and for compatibility reasons, I've decided to keep the original allocator - the zone memory system is integrated into every subsystem in the game, and I can't be sure that replacing it wouldn't affect compatibility in some way.


IMHO another advantage of zone memory allocation is the call to Z_FreeTags(PU_LEVEL): we don't have to care about which memory to free after ending a level, we just do the job with a single function call. On the other hand, Z_Malloc could be more sophisticated, selecting a memory block that better fits the size of the allocation instead of splitting the first memory block it finds, regardless of its size.

jval said:

IMHO another advantage of zone memory allocation is the call to Z_FreeTags(PU_LEVEL): we don't have to care about which memory to free after ending a level, we just do the job with a single function call.



If you want to call built-in sloppiness a good idea...


Graf: You mean garbage collection?

jval said:

IMHO another advantage of zone memory allocation is the call to Z_FreeTags(PU_LEVEL): we don't have to care about which memory to free after ending a level, we just do the job with a single function call. On the other hand, Z_Malloc could be more sophisticated, selecting a memory block that better fits the size of the allocation instead of splitting the first memory block it finds, regardless of its size.


That would be a best-fit strategy rather than first-fit, and it is not necessarily better. Why? Because always choosing the smallest sufficient block often means choosing one that is just a little too large, and then you end up splitting off a lot of small, unusable blocks. This is external fragmentation, which can render a huge chunk of your memory unusable. There will always be some external fragmentation, of course - there's quite a bit with first-fit too, especially in the region of memory that contains most of the cached WAD lumps. That's the area of Doom's heap that is the least pretty to look at :)
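
To make the difference concrete, here are the two policies over a toy free list (illustrative structures, not Doom's actual memblock_t):

#include <stddef.h>

typedef struct freeblock_s
{
    size_t size;
    struct freeblock_s *next;
} freeblock_t;

/* First-fit, effectively what Z_Malloc does: take the first block
   that is big enough, then split it. */
freeblock_t *FirstFit(freeblock_t *list, size_t size)
{
    for (; list != NULL; list = list->next)
        if (list->size >= size)
            return list;
    return NULL;
}

/* Best-fit: scan the whole list for the tightest fit. The winner is
   often only slightly too large, so splitting it leaves a tiny,
   unusable fragment -- the external fragmentation described above. */
freeblock_t *BestFit(freeblock_t *list, size_t size)
{
    freeblock_t *best = NULL;

    for (; list != NULL; list = list->next)
        if (list->size >= size && (best == NULL || list->size < best->size))
            best = list;

    return best;
}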


Well, perhaps a more appropriate term is domain-specific allocation lifetimes. At any rate, I don't see how it's sloppy. It guarantees that you cannot forget to free memory related to a level, and since you must explicitly declare something PU_LEVEL for it to be treated in this way, it's not as if it can happen by accident (unless maybe you are coding drunk, or at 4:30 am). Some modern game engines have huge problems with memory leaks; Doom has never had that issue.

The only thing that bugs me is that this system isn't taken any further. I've been thinking about whether or not it would be possible to reimplement my Z_Alloca function by using a new cache level, PU_ALLOCA. Seems like it should be doable.
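
Something like this is what I have in mind, where PU_ALLOCA, its value, and the sweep point are entirely hypothetical:

#define PU_ALLOCA 50    /* hypothetical tag, kept below PU_PURGELEVEL
                           so the block can't be evicted early */

void *Z_Alloca(int size)
{
    return Z_Malloc(size, PU_ALLOCA, NULL);
}

/* called once at a known safe point, e.g. the end of each tic */
void Z_FreeAlloca(void)
{
    Z_FreeTags(PU_ALLOCA, PU_ALLOCA);
}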


It's sloppy because it locks you forever into the zone heap's specific implementation, and if you have to use something different for whatever reason, you are either screwed or in for a lot of work. I'm done writing code with such general assumptions in mind. They *will* come back to bite you in the ass later.

I still remember the work to write proper cleanup code for FraggleScript. It was not fun to do that for something that blindly assumed it could just forget about everything it did at the end of a level.


Fraggle said:
Chocolate Doom actually has an alternate implementation of the z_zone interface that just wraps the standard OS malloc/free functions.

You can't maintain 100% vanilla compatibility without keeping Doom's heap manager. One reason is that certain demosync-sensitive variables aren't initialised properly and depend on what was on the heap before the block was allocated.

In particular there's whether or not a rising staircase will crush things on it. When EV_BuildStairs creates all of its floor thinkers, it doesn't actually bother to initialise floor->crush. So whether or not a stairbuilder crushes a monster is somewhat random.
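
Condensed from vanilla's EV_BuildStairs in p_floor.c, for reference: the floormove_t comes straight off the zone heap, and crush is simply never assigned.

floor = Z_Malloc(sizeof(*floor), PU_LEVSPEC, 0);
P_AddThinker(&floor->thinker);
sec->specialdata = floor;
floor->thinker.function.acp1 = (actionf_p1)T_MoveFloor;
floor->direction = 1;
floor->sector = sec;
floor->speed = speed;
floor->floordestheight = height;
/* floor->crush keeps whatever bytes the heap block held before */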

The UDS stated that slow stairs don't crush but fast stairs do, which is the model everybody uses now, but I don't believe stairs were ever meant to crush - if they were, EV_BuildStairs would have explicitly coded it. On the other hand there's at least one map in Cyberdreams that assumes slow stairs do crush, so it's really too late to fix it properly.

entryway knows more about this stuff than I do.

Graf Zahl said:

It's sloppy because it locks you forever to the zone heap's specific implementation and if you have to use something different for whatever reason you are either screwed or in for a lot of work. I'm done writing code with such general assumptions in mind. They *will* come back to bite you in the ass later.

I still remember the work to write proper cleanup code for FraggleScript. It was not fun to do that for something that blindly assumed it could just forget about everything it did at the end of a level.

FraggleScript was never coded properly with respect to the zone allocator or level transitions, and these were some of the major problems that led me to remove it from EE. Imagine my surprise when I tried to move Eternity TC to a hub layout and could not do so, because the FraggleScript engine would corrupt itself as it saved and then reloaded the hub save files (which were also, curiously enough, deleted at program exit!). At any rate, a good deal of those problems cannot be blamed directly on the zone allocator.

I guess you can look at the problem in one of two ways:
1. The zone allocator is a non-essential component, an alternative to system malloc, somewhat dated, and therefore better discarded.
2. The zone allocator is an integral part of the game engine and will be retained indefinitely.

I've taken the second approach with EE, adding significant new debugging tools such as memory op logging, core dump, and heap structure print-out, a new realloc function that can expand and contract blocks rather than always moving them wholesale, and as I mentioned earlier, a portable alloca implementation.

The transparency of the zone heap has also proven critical in the cracking of several bugs.


I am afraid to say that whether or not you maintain the zone heap, guaranteed compatibility with bugs that rely on zone heap corruption, non-initialization, or other misuse is impossible. Any change whatsoever to the number, order, size, or alignment of allocations and deallocations, or to the size of the zone heap, will by necessity modify the structure of the heap. In the event of corruption, it is impossible to guarantee the defined behavior of the program as a whole beyond the next memory operation (and it will die the next time Z_CheckHeap is run, if not sooner).

So for these types of bugs there are three possibilities:
1. Create a mode that attempts to emulate Doom's zone heap exactly. Sounds like a bad idea to me, and is bound to end in code explosion in z_zone.c -- not to mention it will involve reintroducing the bad design and bugs of the original allocator, which can kill the whole program. It also affects all code outside the zone heap, since you cannot load or free anything that Doom didn't load or free at the exact same time.
2. Try to emulate the problem via analyzing the contents of memory during playback of a certain demo, and put those values into RAM deliberately. Drawback is that it would require every demo suffering the problem to be analyzed, and a large data table with demo hash codes. It also wouldn't fix the issue for general play, only for demo playback.
3. Recognize that it's nearly impossible to deal with this kind of thing and just admit that it falls into the 0.1% of demos we're never going to sync / levels that cannot be perfectly supported, due to their reliance on purely undefined behavior.

