Coraline

>35fps methods?


I'm curious how other port authors have implemented an uncapped framerate. A while ago I tried studying PrBoom's interpolation methods, and while it's intriguing, I didn't get very far with adapting it, mostly from a lack of understanding of how it worked. I want to try it with 3DGE.

Andrew has spoken briefly about it and said it would be like a domino effect (lots to do), but so far I know that at least everything down to thing movement would need to be interpolated. PrBoom had, I believe, an external bit of code that did most of this, which is the approach I tried taking. Is an uncapped framerate popular enough to warrant the feature (in particular, with GL ports)?

I will admit, I do not know much, but anything helpful right now is... well, helpful. The more I understand how something works beforehand, the better I can draft it out. I hope I'm not pestering anyone, as I do have other questions. Thanks, everyone. :)


When finding the distance of graphical objects to rasterized pointers in the code, make sure to check the event loop at each iteration. Then interpolate the frames for each cycle up to the monitor's refresh rate. In the end you'll end up with something that looks sloppy, since Doom's movement code doesn't lend itself well to traditional least-squares approximations, but this can be dealt with using bicubic patches to force each update to refer to the previous frame buffer refresh.

Actually, I don't know.


Interpolation really is the only way to go.

The game itself is so hard-locked to 35 fps that trying to change this is going to fail for certain.

Graf Zahl said:

Interpolation really is the only way to go.

The game itself is so hard-locked to 35 fps that trying to change this is going to fail for certain.


It would be interesting to see how it would behave if all per-tic speeds were halved/thirded/etc., frame delays were doubled/tripled/etc., and the target framerate was actually increased to 70 fps or some other multiple of 35 fps, meaning double-resolution player input, per-second actions and all.

The current motion interpolation algorithms use "best effort" strategies with no preset fps target: you don't even know a priori how much time you will have left before another tic actually starts, or how much time you can afford to spend interpolating another frame instead of running game logic. You may barely get 37, 40 or 50 fps (of which only 35 will be "real"), but you never know in advance. Of course, as long as it's pleasing to the eye...

DuckReconMajor said:

When finding the distance of graphical objects to rasterized pointers in the code, make sure to check the event loop at each iteration. Then interpolate the frames for each cycle up to the monitor's refresh rate. In the end you'll end up with something that looks sloppy, since Doom's movement code doesn't lend itself well to traditional least-squares approximations, but this can be dealt with using bicubic patches to force each update to refer to the previous frame buffer refresh.

Actually, I don't know.


Hahahahah. I always assumed it came down to the movement code updating on each tic rather than on each frame, like the player's movement.


I just did a small experiment with Mocha Doom to that end (an actually faster ticrate). It needs some careful thinking: e.g. defining a "TIC_MUL" constant to use as a timer multiplier, and a fractional "MAPFRACUNIT" constant defined as FRACUNIT/TIC_MUL to use when dealing with movement speeds (but not distances or dimensions). It also requires some work to determine when it's appropriate to use the altered values and when to leave things as they are.

With TIC_MUL = 2, the result is eerily smooth, with a feel quite different from what you get from an interpolated display. It's possible to record "70 fps" demos, too (heh, TAS speedrunners must be gleeful at the potential for abuse). I suppose a port altered like this could have its uses in highly competitive/specialized environments (e.g. multiplayer, danmaku mods, bullet time...), but don't expect any regular "35 fps" demos to work with it, regardless of which engine you use as a base.

Maes said:

a fractional "MAPFRACUNIT" constant defined as FRACUNIT/TIC_MUL

So better use only powers of two for that TIC_MUL constant.

Gez said:

So better use only powers of two for that TIC_MUL constant.


Even dividing by powers of two will cause enough accuracy loss in many places, if the idea is to somehow keep full 35 fps compatibility while running at multiples of it. With sufficiently high values of TIC_MUL, even the relative "safety" of powers of two will not suffice to avoid hitting fixed-point precision limits.

Anyway, it's just an interesting experiment meant to prove that making a faster Doom engine is far from impossible ;-) (if there's enough interest, I'll release the modified .jar). The changes are easy enough to keep hardcoded in the main codebase, though, and as simple to toggle on/off as changing a constant.

However, inflating the ticrate just to enhance smoothness for existing mods & ports isn't worth it, IMO, and visual interpolation is probably enough for most people.

OTOH, if new gameplay modes and elements are introduced that can actually take advantage of an increased engine speed (ultra-smooth monster or texture animations, very fast-firing weapons/sprites/enemies with up to 70 states per second, bullet hells, etc.), then yeah, we're talking.

Maes said:

Anyway, it's just an interesting experiment meant to prove that making a faster Doom engine is far from impossible ;-)



Nobody disputes that. But being technically possible does not mean the result would be faithful to how Doom feels. Too much of the internal logic depends on 35 fps being 35 fps. Change that, and non-subtle changes will occur that make ZDoom's gameplay alterations irrelevant in comparison.

Aside from that, nothing is really gained. You'd be locked to just another FPS value, with all the same shortcomings that occur if your system is too slow. If you want to be flexible, interpolation is really the only way to go.

Graf Zahl said:

Too much of the internal logic depends on 35 fps being 35 fps.


Actually, once you adjust the various tic delays to the new TIC_MUL constant, player and monster attacks feel exactly the same. However, some things that play with the minimum FRACUNIT value (e.g. the sliding algorithm) are very tricky to get right, and as a result Doomguy now feels like he's being bumped off walls when trying to wallrun (and the final achievable speed is altered).

So yeah, some mechanics are different, but mostly due to an incomplete implementation. There's no reason why it can't run at 70 fps and feel exactly the same, given vanilla data (with the major exception of demos).

Graf Zahl said:

Aside from that, nothing is really gained.


I think I mentioned some pretty valid points, like more states/second, smoother player and AI actions, and generally all the advantages of having an actual playfield that runs at a higher frame rate. I recognize, however, that without mods or special game modes that take advantage of those features, these considerations are purely academic.

And of course, 70 or 105 fps gameplay logic requires more processing power, and may or may not be coupled with the usual purely visual interpolation methods. If all you want is the illusion that the player or monsters move more smoothly than they actually do, then visual interpolation is really all that's needed.


... and all that with today's monitors, which can only do 60 fps. So why let the game run faster?

Out of curiosity, how do modern games handle this? After all, interpolation is a proven concept that allows the game engine to run at a fixed but relatively low speed, independent of the display hardware.



BTW, interpolation of monster movement sucks so badly that ZDoom has an option to turn it off. Interpolation of sector plane movement and texture scrollers is much, much more important.

Graf Zahl said:

... and all that with today's monitors, which can only do 60 fps. So why let the game run faster?


You know how many other, more or less related, cans of worms this statement can open, right? :-p

Graf Zahl said:

Out of curiosity, how do modern games handle this? After all, interpolation is a proven concept that allows the game engine to run at a fixed but relatively low speed, independent of the display hardware.


Probably with a very similar "best effort" strategy, since it's not guaranteed that the hardware will be able to keep up with either a fixed engine rate or a decoupled but arbitrarily set display rate. For all you know, you may run the engine at X ticks and have the CPU power to display either X+1 or X+10 frames depending on a gazillion factors, and some frames may be too close in time while others may be too far apart. This rules out solutions like "taking the average of positions between two frames", and results in non-synchronous frames.

It's one thing to be able to pump a constant 70 fps from a 35 fps engine, and another to pump maybe 36, maybe 55. I don't get how the latter can be considered particularly pleasurable, especially when the visuals may become mismatched with state. In arcade games they simply threw enough hardware at the problem to make this a non-issue. On consoles they simply lowered the ante to guarantee near-consistent performance no matter what (this was particularly true of classic 8-bit and 16-bit games, much less so with post-PSX stuff and the unpredictability of 3D scenes).


There will always be factors beyond reasonable control for a real-time game engine on any modern platform. These range from disk seeks to cache misses, and from network latency to display frame syncing. All of these add up and combine into a relatively volatile environment, which by its very nature works against the idea of a fixed-rate, incredibly tight game loop that handles all game duties in one thread.

These temporary 'hiccups' result in minute but easily perceivable stuttering in games.

The reason a decoupled refresh and subframe interpolation are used is that the software can compensate for such hiccups by simply adjusting the interpolants: if that last cycle took longer than expected, simulate fewer frames this time around.

Consistency of framerate is one of the single most important factors in games. Without it a user cannot immerse themselves in the experience as deeply.

DaniJ said:

The reason a decoupled refresh and subframe interpolation are used is that the software can compensate for such hiccups by simply adjusting the interpolants: if that last cycle took longer than expected, simulate fewer frames this time around.


With the important detail that in most Doom ports the rendering is not decoupled at all from the main thread. Actually, there's no "rendering thread" to speak of: it's all one big serial spaghetti, which goes somewhat like this:

  1. Get accumulated local/network player input (this is actually done in a "pseudo-concurrent" way by sandwiching NetUpdate() calls between the various rendering phases, ever since doom.exe).
  2. Run AI.
  3. Draw stuff (and do some "so quick it appears concurrent" net/input stuff).
  4. Update sound status.
AFAIK no port does ANY of the above truly in parallel (with the exception of Mocha Doom and a special build of prBoom+, which have a parallelized renderer; but the concurrency is internal to the renderer alone and doesn't overlap any of the other main loop phases).

So if even one phase "goes bad", it can affect all the others (within certain worst-case limits, of course). I concede some exceptions for sound (if a sound server of sorts is used, or if an interrupt mechanism can "chime in" to do its stuff at any time), and maybe OS-level network drivers and such, but that's about it.

This is most noticeable if you load a map that is AI/rendering heavy and stalls everything else. Now, prBoom stuffs the interpolation code somewhere inside TryRunTics, and if it decides that the previous phases left enough time, it uses some juju to interpolate an unspecified number of frames. That's hardly "decoupling" in the traditional sense. It's more like a tightly coupled best effort ;-)

Now, if there were indeed an interpolator thread that did its business, or at least part of it, in parallel with most of the main loop, and was thus somewhat able to anticipate the on-screen results before even the "normal" rendering finished, then yeah, we would be talking about decoupling proper.


My response wasn't about DOOM or source ports thereof. Graf asked why modern games do it, and I answered :-)

However I will say that Doomsday has been using multiple concurrent threads for years now. Current releases use independent threads for sound effect playback monitoring, master server communications, network listening and map/resource/texture loading and possibly some other stuff I am forgetting about.

DaniJ said:

Doomsday has been using multiple concurrent threads for years now. Current releases use independent threads for sound effect playback monitoring, master server communications, network listening and map/resource/texture (up)loading and possibly some other stuff I am forgetting about.


That falls neatly into the space I allowed for secondary stuff like sound/networking/non-dependent things that can be serviced concurrently without really affecting gameplay or risking breaking any delicate data dependencies. Actually, something like asking for a resource and then waiting for it while it's serviced on a separate thread is more of an "async task"; these are more and more popular in certain SDKs (e.g. Apple's Grand Central Dispatch, Java's Futures and Callables, and .NET's async/await pattern), with more or less syntactic sugar. Sure, it helps use the extra cores somewhat, but these are still blocking tasks, for the most part. A background network or UI listener thread is another thing.

Sometimes such parallelism is just a side effect of the libraries used (e.g. sound mixers may effectively run on different threads or even different processes), so it comes "for free", but that has nothing to do with how the game logic, rendering and interpolation work: it's more a case of the AI and renderer doing their thing, with the interpolation code given maybe a remaining fraction of a tic to work out displayable differences and render them. Maybe more than once per tic, or maybe not at all, always trying to catch its breath ;-)

OK, maybe modern games are more "concurrent" in the proper sense, but even then I wouldn't put my hand in the fire for it: the way they work, and the way game designers learn to do things, is one big serial, viciously data-dependent snake ;-)


Indeed, I wasn't suggesting that these were hugely amazing advances but it is true concurrency.

I think you would be rather surprised with just how concurrent some of the big game engines are nowadays.

DaniJ said:

Indeed, I wasn't suggesting that these were hugely amazing advances but it is true concurrency.


Hmm... not if you use the term with a number-crunching junkie ;-)

Unless we're talking about splitting up a massively parallelizable, heavy task into as many threads as possible, such as computing the product of two 1000x1000 dense matrices, that ain't true concurrency for the hardcore supercomputer user. Or at least not worthwhile concurrency.

It's merely having the cores pick from a cookie jar of micro-tasks, most of which consist of waiting on events or servicing trivial I/O.


Clearly our definitions of "worthwhile concurrency" differ, in that case. I believe there can be many good reasons to parallelize outside of concurrent number crunching; a classic example is decoupling the refresh for real-time feedback of progress in worker processes (loading animations).

I do agree that this doesn't meet your definition :)

Maes said:

OK, maybe modern games are more "concurrent" in the proper sense, but even then I wouldn't put my hand in the fire for it: the way they work, and the way game designers learn to do things, is one big serial, viciously data-dependent snake ;-)



That's probably because any rendering thread needs the data from the game thread, and also needs to ensure that the data doesn't change while it's working on it.

Proper synchronization will often cost more than multithreading gains. For things like sound mixing/streaming and similar tasks that's not the case, so they'll always be the first things to be moved to separate threads.

