Quasar

Future-safing C code?


Graf Zahl said:

BTW, ZDoom also does not use STL but its own implementations of vector, map and string. Why?

Because randy suffers from the worst case of NIH syndrome ever known to medical science? :-p

No I'm joking of course but then again he did rewrite dynsegs for no apparent reason and even made his own png library... what a guy...

Edit: sorry, apparently it was GZ who did dynsegs, and he had his reasons; see the post below this one. Fair enough. I was posting firmly tongue-in-cheek, but I guess that didn't come across so well.

RjY said:

Because randy suffers from the worst case of NIH syndrome ever known to medical science? :-p

No I'm joking of course but then again he did rewrite dynsegs for no apparent reason and even made his own png library... what a guy...



WTF???


A) Nobody rewrote dynsegs. Besides, I implemented the polyobject splitting code and there were 2 reasons I did not use Eternity's dynseg code:

1) Since it's GPL it was not usable in ZDoom.
2) More importantly, ZDoom uses Hexen's polyobject code which is completely different from Eternity's. To make it short, Quasar's code was just not usable because it relied on differently organized data.

For the other things:

TArray vs. std::vector: This code's roots predate any useful STL support in MSVC, which is the main development tool.
TMap vs. std::map: Read the comment in the source for why it was done. BTW, it's NIH code, taken from Lua.
FString vs. std::string: Was done for scripting support. Since STL implementations are not guaranteed to be identical, no STL-derived class can be used as a parameter to a scripting library, because you never know its size or internal semantics.

Concerning the PNG code, aside from the fact that it's old I think that a 30 kb source file doing everything by itself is not much worse than a 20 kb source file relying on an external library. Half the stuff in this file is code to handle ZDoom specific stuff that gets added to savegames (which are stored as PNGs with added chunks.)


Props to the first guy who ports Doom to FORTRAN or at least rewrites part of it ;-)

The column-major format of its tables could lead to some interesting rendering optimizations, at least during certain phases.


How 'bout a Perl port?

Gez said:

How 'bout a Perl port?


`$=`;$_=\%!;($_)=/(.)/;$==++$|;($.,$/,$,,$\,$",$;,$^,$#,$~,$*,$:,@%)=(
$!=~/(.)(.).(.)(.)(.)(.)..(.)(.)(.)..(.)......(.)/,$"),$=++;$.++;$.++;
$_++;$_++;($_,$\,$,)=($~.$"."$;$/$%[$?]$_$\$,$:$%[$?]",$"&$~,$#,);$,++
;$,++;$^|=$";`$_$\$,$/$:$;$~$*$%[$?]$.$~$*${#}$%[$?]$;$\$"$^$~$*.>&$=`


Well, since I did the bulk of Mocha Doom development, in theory now anyone can attempt porting into a non-C language without pointers etc. ;-)

A FORTRAN port would actually be quite close to C as it has pointers (but similar to Mocha Doom in that it doesn't have unsigned types so some caution would be necessary when fucking around with BAMs and screen writes).


I got burnt by STL's vector. I substituted the TArrays with std::vector in zdbsp; it worked on smaller maps, but on bigger ones it crashed, as if std::vector had some upper size limit and just wouldn't grow any larger. With GCC and VC2008 both.

Gez said:

How 'bout a Perl port?

AFAIK Perl 6 isn't finished yet and Perl 5 doesn't have a JIT compiler, so it would be slow as hell. A port to (any) dynamic language would be neat though (I'd port it to Python, but it doesn't have a JIT compiler either).


JavaScript could be feasible too, either directly, or "cheating" by using GWT or a similar toolkit. The game logic would run fast, but unless you also used HTML5 and the canvas tag, I can't think of any reasonably fast way to update a pure bitmap raster. Might work for vanilla resolution, at most.

And C# would be straightforward too, with some actual advantages over the original Java (e.g. clear-cut unsigned types, lightweight structs).


There's a lot of attention on making JavaScript fast these days, so it would be OK performance-wise I guess, but the standard library lacks non-browser functionality (file access, sound, networking, etc) so it would be quite a hackjob IMO.

Grubber said:

There's a lot of attention on making JavaScript fast these days, so it would be OK performance-wise I guess, but the standard library lacks non-browser functionality (file access, sound, networking, etc) so it would be quite a hackjob IMO.

JavaScript is still for embedding; it's never been standardized for application programming. Though there is certainly no linguistic reason for this - the language itself is definitely strong enough to support it.

Applications that embed JS are expected to provide facilities like file IO etc. via exposing native methods.

And BTW, browsing functionality is not exposed through the ECMAScript standard object set either. I have to keep explaining this to people over and over - the DOM is not JavaScript, and JavaScript is not the DOM. The Document Object Model is a separate standard maintained by the W3C for web browsers that embed JavaScript, so that they have one single way of defining the previously mentioned native interfaces to manipulate HTML rendering.

It's like the difference between C and zlib. zlib is written in C and frequently comes installed with it (on Linux) but it is neither a part of the language nor is it even defined by the same standard. The DOM is effectively a library for JavaScript.

The only objects JS provides out of the box are things like Math and Date and regular expressions.

Maes said:

And C# would be straightforward too, with some actual advantages over the original Java (e.g. clear-cut unsigned types, lightweight structs).

Doesn't the Java compiler attempt to automatically change some classes to be structs?

david_a said:

Doesn't the Java compiler attempt to automatically change some classes to be structs?


Huh? Not as far as I know. Everything is still an object allocated on the heap, even 'static' ones, and there are no lightweight objects with value semantics allocated on the stack, not even in the C++ sense. At least it's not part of the specs; the JVM may choose to do as it wishes.

Now it might be that access to the fields of certain small objects is particularly optimized during copying/cloning/etc. but since there are not even guarantees that an object will have all of its fields neatly arranged in memory like you would expect in C, C++ or Delphi, it's a moot point: it just might be that a particularly well-made VM actually does that, but the spec doesn't say they do.

That being said, I've seen some pretty impressive performance feats from Java (not the least of which is Mocha Doom itself), so it just might be catching up.


I dug up the place where I read that and they were talking about the escape analysis that was added in JVM 1.6. In theory it can be used to turn heap allocations into stack allocations, but apparently it's currently mainly used for some optimizations to prevent unnecessary locking.


Here's another C++ question, not really worthy of its own thread.

I want to have a series of global singletons that add themselves into a list at startup (order is not important, and the list head is a primitive pointer which will be properly initialized before global constructors start running).

The question I have is, and I cannot seem to find a 100% clear answer on this anywhere, do I need to reference the objects explicitly somewhere to avoid having the compiler "optimize" them out, even though their constructors have blatantly obvious side effects?


No. If the constructor of a static object has observable side effects, the object may not be eliminated. The only thing you have to be careful about is that if such objects use other global objects that also have constructors, the order of execution is not guaranteed, so you'd better avoid such constructs. But what you describe sounds like the list head is just a static variable, so no problems there.

ZDoom initializes all its CCMDs and CVARs this way, btw.


In Doom, however, many things are surprisingly treated by reference like "objects", even in the original C code, so in that respect it fits in neatly with how Java works. What you lose in most cases are just the static stack allocations of some fixed stuff like visplanes or drawsegs, and the struct-to-struct direct copies that are present in a few spots.

Whenever you load a new level, however, the level's structures are entirely dynamic and malloc'd on the heap, as are thinkers (monsters). The only things that really look fuglier when treated as references, and take some tinkering to get working with little performance loss, are the event and the network code, which rely heavily on reusing the same structs again and again. That takes some extra tinkering to recreate in Java (but I'm almost there).

Also, some stuff like point and line primitives works faster if you turn some of its fields into primitive types, e.g. instead of going line->vertex1->x I do line.v1x, without allocating two extra "vertex" objects per line (Doom would use pure structs in this place). But other than that, Doom proved quite well behaved (no idea if they had planned on making it C++ or even Objective-C at some point, as many things map neatly to OO).

Maes said:

(no idea if they had planned on making it C++ or even Objective C at some point, as many things map neatly to OO)

Carmack has stated in the past that it was the speed of Objective C binary code at the time, and the lack of a good Objective C compiler for DOS, that were some of the reasons this didn't happen. However you can certainly see some of the influence that his contemporaneous use of that language had on his coding style, as it had become much cleaner and structured than it was in Wolf3D, which was written just a couple years before.

The worst code in DOOM was written by other people who didn't have the same experience ;)

Quasar said:

it had become much cleaner and structured than it was in Wolf3D

Well, it could hardly have become worse than Wolf3D. That source code is the work of a mad scientist: when you look at it, nothing makes sense, and you just don't see how it could even work, but it does anyway, and all the while the creator is cackling, "it's alive! it's alive!"



But anyway, yeah, Doom was written in an OOP paradigm.


Gez didn't say:
That source code is the work of a mad scientist: when you look at it, nothing makes sense, and you just don't see how it could even work, but it does anyway, and all the while the creator is cackling "I made this source code my bitch! Suck it down!"


FTFY.

Quasar said:

That sounds a lot more like Romero than Carmack :P


That would make it more consistent with my theory that Carmack did all of the work, the code didn't work, so Romero came in, made a few touches and it worked, therefore making the source code his bitch. And the rest, as they say, is legend ;-)


Of course it is true that well-structured code will *mostly* translate well into other coding systems. It is always the hack-code that gives trouble when going to OO, or to any other coding structure for that matter.
It is a measure of the flexibility of the new target coding system how well it can be adapted to the hack-code. It is sometimes a measure of the flexibility of the programmer doing the conversion, too. :)
I wonder about all those "Hack warning"s that appear in the source code.
Most of them are not that much of a hack.
Agreed, though, that some things, like the wall and sprite draw sorting, taken as a whole, are incomprehensible.


<CIA-9> Eternity: quasar cpp-branch * r1369 /source/ (121 files in 2 dirs): First compiling C++ revision.
* csonig has quit (Ping timeout: 480 seconds)
<CIA-9> Eternity: quasar cpp-branch * r1370 /source/ (d_player.h p_info.cpp p_info.h st_stuff.cpp): weaponowned must be declared as int[], 
  not boolean[], or an access violation will result in ST_DrawWidgets due to a bad attempt at type punning that only worked in C (thanks, Dave Taylor :P )
We have lift-off!


That last changelog makes me wonder about 2 things:

1. Is that bloated statusbar code even worth keeping?
2. How will you handle user defined weapons with the current method of handling the player inventory? The way Doom is doing this seems to be totally counterproductive to extensibility.

Graf Zahl said:

That last changelog makes me wonder about 2 things:

1. Is that bloated statusbar code even worth keeping?
2. How will you handle user defined weapons with the current method of handling the player inventory? The way Doom is doing this seems to be totally counterproductive to extensibility.

1. In the long run, no. There are too many problems with it. But I can't change it right now, there is already too much going on at once.
2. The inventory system will be revamped entirely soon, to work with EDF-defined inventory items.


I looked, and DoomLegacy still has structures that are required to have the first couple of fields the same, so the savegame code does need individual save routines. Has anyone else managed to clean that up in their port, better than individual save and restore routines?

I also said that I do not trust the compiler to align such structures the same, and I have thought about that a bit. I assume that we are all still using such structures to read the WAD files. If modern compilers might stick some alignment padding into a structure to speed up access, does anyone know what would trigger it? Do we all need to put PACKED pragmas on all such structures to future-proof our code?
How are we getting away with this now?

wesleyjohnson said:

I looked, and DoomLegacy still has structures that are required to have the first couple of fields the same so the savegame code does need individual save routines. Anyone else manage to clean that up in their port, better than individual save and restore routines ??

I also said that I do not trust the compiler to align such structures the same. Thought about that a bit. I assume that we are all still using such structures to read the wad files. If modern compilers might stick some alignment padding into a structure to speed up access, does anyone know what would trigger it. Do we all need to put PACKED pragmas on all such structure to future proof our code ??
How are we getting away with this now ??

All directly-read structures should have packing pragmas. 64-bit especially will turn into a train wreck if you don't have them, as most compilers default to wider alignment on that platform. You're getting away with it now because the alignment just happens to come out right by default on the platforms you use.

As for saving and loading, I don't know if you would consider it "better" or not, but Eternity's current interim solution for thinker serialization works through a combination of virtual setOrdinal(), getClassName(), serialize(), and deswizzle() methods, plus a "virtual constructor" idiom accomplished via external factory classes. These factories are global singletons, one per Thinker class descendant, which add themselves automatically to the ThinkerType parent class's static list of objects as they are instantiated by the runtime at startup. A static ThinkerType::FindType method allows them to be found by name, and then the newThinker() method of the found object serves as a virtual constructor for the thinker class to which it corresponds.

So, on write, it looks like:

P_NumberThinkers(); // calls setOrdinal if shouldSerialize() is true
...
for(th = thinkercap.next; th != &thinkercap; th = th->next)
{
   if(th->shouldSerialize())
      th->serialize(arc);
}
arc.WriteLString(TC_END);

On read, it looks like this:

// After reading the next name out of the savegame, in a loop...
   if(!(thinkerType = ThinkerType::FindType(name)))
   {  
      if(!strcmp(name, TC_END))
         break; // we are done!
      else
         // error...
   }
   Thinker *newThinker = thinkerType->newThinker();
   newThinker->serialize(arc); // SaveArchive is situationally bidirectional
   newThinker->addThinker(); // put it in the thinker list
... 
// after all thinkers are dearchived, resolve mutual references
for(th = thinkercap.next ... )
   th->deswizzle();


Thanks, I suspected that too. I know that one of our developers runs 64-bit and apparently his runs succeed, so I suspect that GCC is keeping those structures packed (for the time being).

So one conclusion is that future-safing C will include PACKED pragmas on WAD structs.
-------
Interesting, full C++ object-oriented I/O. DoomLegacy 2.0 probably does it that way too, but I am working on the 1.44 C branch and will not go past C99 there. It would be abandoned now if the 2.0 branch were ready.

Maybe some kind of C function ptr to call virtual I/O funcs. But that still ends up with one I/O func per structure type, and could just as well use the existing switch stmt. It seems that even the C++ solutions end up with an I/O func for every structure or object.
It is also a question of how much work is worth putting into "fixing" such structure abuses (because in C such abuse seems to be common).


I had to take care of I/O in a purely OO way in MochaDoom, while still maintaining the flexibility of the universal WAD archive management. Perusing my code will give you a better idea of how I did it, but let's say that every struct or object that can be read from disk implements an interface called "ReadableDoomObject" (if it's to be read exclusively from disk) or "CacheableDoomObject" (if it's to be deserialized from an arbitrary byte buffer, e.g. a bunch of VERTEXES packed together).

E.g. vertex_t will implement CacheableDoomObject, and when loading a level I allocate an array of them and call a deserializing method on the array itself, which in turn calls the unpack() method inside each vertex_t in the correct order, taking care of proper byte endianness, reading of null-terminated strings, etc.

And all of that without worrying about the target platform, word alignment or word width.

