dpJudas

Members
  • Content count: 251
  • Joined
  • Last visited

4 Followers

About dpJudas
  • Rank: Member
  1. The devil is in the detail. With Doom you essentially can't get it wrong in the game code: no matter how you write a tic function, it will always run the same on any computer. In variable-rate games like Descent it is very different. The codebase does have a FrameTime variable, and 95% of the code uses it correctly - i.e. a laser shot moves at the same speed on any computer running Descent. It is the last 5% that breaks the game, and it went unnoticed during development because, as long as the frame rate stays in the ballpark the game was tested at, everything seems reasonably fine. To illustrate why that 5% is so catastrophic, here are some examples from Descent:

     1) The ship hover effect. This is essentially a sine curve applied over time, but because it has to be applied as a velocity force every tick, it gets a little tricky to write in a truly frame-rate-agnostic way. What they did instead was to not look at FrameTime at all and advance the curve by a constant amount each frame.

     2) The homing missiles. Presumably for performance reasons they only do a hitscan every 8th frame. If the hitscan connects, the missile compares the angle to the target and, if it is acceptable, turns toward it. Unfortunately, the higher the frame rate, the less the target moves between checks, so the faster and crazier you have to move to make a missile lose its tracking.

     3) Keyboard turn input. Coded exactly like Doom does it, but since the tic rate isn't fixed, it doesn't work right.

     Quake 3 and Unreal Tournament are probably also worth mentioning: two fine examples of games where, even if you do remember all the FrameTime calculations, you can still run into problems when the time slices get too small. You have to limit the frame rate in those games too, because the movement and bullet code begins to act a little odd once each individual movement step is tiny enough. As a simplified illustration of the problem, imagine you track time in whole milliseconds. That sounds fine at 30-60 fps, but at 1000 fps the delta time value hovers unpleasantly between 0 and 1. If the engine rounds the number down, everything freezes; if it rounds it up, everything runs too fast. (See the sketch below.)
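     A minimal sketch of that last illustration - hypothetical code, nothing from Descent's source; the function, speed, and frame-rate values are all made up for the demonstration. It advances a ship for one real second at a given frame rate, measuring each frame's duration on a timer that only resolves whole milliseconds:

     ```cpp
     #include <cmath>
     #include <cstdio>

     // Advance a ship at 100 units/second for one simulated second at `fps`.
     float AdvanceShip(int fps, bool roundUp)
     {
         const float speedPerSec = 100.0f;
         const float trueFrameMs = 1000.0f / fps;   // real frame duration
         float pos = 0.0f;

         for (int frame = 0; frame < fps; frame++)
         {
             // A 1 ms timer cannot represent a 0.67 ms frame: it reads 0 or 1.
             float measuredMs = roundUp ? std::ceil(trueFrameMs)
                                        : std::floor(trueFrameMs);
             pos += speedPerSec * measuredMs / 1000.0f;
         }
         return pos;
     }

     int main()
     {
         printf("60 fps:          %6.1f units\n", AdvanceShip(60, false));   // ~96: seems fine
         printf("1500 fps, floor: %6.1f units\n", AdvanceShip(1500, false)); // 0: frozen
         printf("1500 fps, ceil:  %6.1f units\n", AdvanceShip(1500, true));  // 150: 50% too fast
     }
     ```

     At 60 fps the truncation error is small enough that playtesting would never flag it; past 1000 fps the very same code either freezes or runs 50% too fast, depending only on the rounding direction.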
  2. It is about a variable-rate playsim vs. a fixed-rate one. In Descent and Quake the playsim ticks after each rendered frame. When those games were developed, the frame rate was somewhere in the 10-75 fps ballpark; now they can reach thousands. The code was basically never tested against the case where the delta time got that low. Likewise, the threaded code in UT99 was never tested on an actual multicore CPU, so it broke when exposed to one. In Doom everything is locked to 35 fps, which the developers at id might even have managed to reach themselves. That is essentially why Doom doesn't behave erratically: it never attempts to slice time into anything smaller than that. (A schematic contrast of the two loop styles follows below.)
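     To make the contrast concrete, here is a schematic sketch of the two loop styles - not either engine's real main loop. P_Ticker is Doom's actual playsim entry point; every other name is a made-up stand-in:

     ```cpp
     #include <chrono>
     #include <cstdint>

     // Stubs standing in for the real engine functions.
     static void P_Ticker() {}           // fixed-rate playsim step (Doom's real name)
     static void RunPlaysim(float) {}    // variable-rate playsim step (hypothetical)
     static void RenderFrame() {}        // hypothetical renderer entry point

     static uint64_t GetTimeMicros()
     {
         using namespace std::chrono;
         return duration_cast<microseconds>(
             steady_clock::now().time_since_epoch()).count();
     }

     // Doom's style: time is sliced into fixed 1/35 s tics no matter the frame rate.
     void DoomStyleLoop()
     {
         const uint64_t ticLen = 1000000 / 35;
         uint64_t accumulated = 0, last = GetTimeMicros();
         for (;;)
         {
             uint64_t now = GetTimeMicros();
             accumulated += now - last;
             last = now;
             while (accumulated >= ticLen)   // the playsim never sees a smaller delta
             {
                 P_Ticker();
                 accumulated -= ticLen;
             }
             RenderFrame();
         }
     }

     // Descent/Quake style: one playsim step per rendered frame, with whatever
     // delta that frame took. At thousands of fps the delta becomes tinier than
     // anything the developers ever tested against.
     void DescentStyleLoop()
     {
         uint64_t last = GetTimeMicros();
         for (;;)
         {
             uint64_t now = GetTimeMicros();
             RunPlaysim((now - last) / 1000000.0f);
             last = now;
             RenderFrame();
         }
     }
     ```

     The fixed loop guarantees the playsim only ever sees a 1/35 s delta, so untested time slices simply cannot occur; the variable loop hands the playsim whatever the hardware produces.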
  3. At the lowest level it uses basic message sending over UDP, where each message can be flagged as important or not. Important messages are guaranteed to be received, and messages always arrive in order.

     On top of that it uses a system where the client playsim starts empty. The level is loaded and only the completely static things are spawned. From there the client receives a spawn message for each dynamic object in the map from the server. As the objects change, delta messages are sent to keep the clients in sync. (A rough sketch of this message layer follows below.)

     Player prediction is performed the same way as in normal ZDoom P2P: P_PredictPlayer is called and simulates the extra tics the client is ahead. Before the next playsim tic those are rolled back by P_UnpredictPlayer. Just like in today's P2P model, the input data is also sent to the server, and its PlayerPawn is the authoritative object for where the client is located.

     Since ZScript can pretty much do anything, the basic idea for object replication is that each server-side class can be spawned as a different client-side class. Unless specified otherwise, objects from the server spawn as the NetSyncActor class on the clients. This actor is a dumb slave object that replicates just enough to maintain the visuals: position, rotation, sprite, audio and so on. In other words, the client will not be running exactly the same actors as the server. An actor class can customize which client actor gets spawned by using a keyword in ZScript. The spawned client actor can send messages to and receive messages from its server actor, and the class can also declare member variables that should be kept in sync automatically. If actors in the client playsim spawn additional objects, those will not appear on the server - object synchronization is strictly server to client.

     Note: I'm only describing the rough plan here because ketmar asked. I'm specifically not looking for input on how I should do it completely differently because engine XYZ does it that way. A large part of this design was dictated by what is realistic to implement without the freedom you have when starting from scratch.
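     For illustration only, a rough sketch of what such a message layer could look like. Every type and field name here is hypothetical, not the branch's actual code; only NetSyncActor and PlayerPawn are names from the description above:

     ```cpp
     #include <cstdint>
     #include <string>
     #include <vector>

     enum class MsgType : uint8_t
     {
         SpawnObject,    // server -> client: create a NetSyncActor (or custom class)
         ObjectDelta,    // server -> client: changed replicated members for one object
         DestroyObject,  // server -> client: remove the object
         PlayerInput,    // client -> server: input for the authoritative PlayerPawn
     };

     struct NetMessage
     {
         MsgType type;
         bool important;             // important = guaranteed delivery; all arrive in order
         uint32_t objectId;          // which replicated object this concerns
         std::vector<uint8_t> payload;
     };

     struct SpawnPayload
     {
         std::string clientClass;    // "NetSyncActor" unless the ZScript class overrides it
         double x, y, z;             // just enough state to maintain the visuals
         double angle;
         // sprite, audio state, synced member variables, etc.
     };
     ```

     Presumably spawn and destroy messages would be flagged important, while a lost delta can simply be superseded by the next one - the usual reliable/unreliable split UDP game protocols make.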
  4. It would be easier to backport the modern GZDoom renderer to Zandronum than to backport the clientserver branch to Rachael's QZandronum branch. Not that there are any volunteers for doing either of those.
  5. This is not really how open source development works. The horror concepts from the corporate world, such as Scrum and sprints, do not apply here, as there's nobody to fire me when I don't do what your bot tells me to do. :) If I wanted to commit to this branch every day, I would just do so. Needless to say, I don't feel like doing that, which is why there are no daily (or weekly) commits coming in. Likewise, having a bot tell the Zandronum devs that they should commit to this branch won't solve any of the reasons why they aren't doing so already. It is nice of you to try to help any way you can, but adding more process is not the solution. Ultimately what is required is sufficiently experienced developers willing to commit to doing the actual work, and the Doom community just doesn't have that right now. If it did, they'd already be working hard on the problem.
  6. Thanks, but there is no point in doing large-scale build testing of anything until it has reached a state where at least all the basics are implemented.
  7. The plan for the branch was to get C/S up and running for ZScript in GZDoom itself - that is, it would get merged into GZDoom. The scope of the branch is intentionally limited to the C/S aspect; it does not include building any kind of server browser. That would be built on top by downstream Zandronum. This is all highly theoretical anyway, because without a finished branch any talk about how to release it isn't very useful.
  8. I have plenty of netcode experience, so that isn't really the problem. The netcode on that branch is working perfectly fine at this point - i.e. player prediction is working as intended and so is the object replication (the flow is sketched below). What is missing is that my knowledge of the playsim is very limited. It more or less amounts to: OK, I know it tics thinkers, something loaded a level, and there's spaghetti everywhere in this part of the codebase. That makes it very difficult to plot a plan for how to deal with finishing a level and replicating the few remaining things (sounds, sector/line changes) without studying it further. Unfortunately my interest in the Doom playsim isn't big enough to do that, which is why the branch is not evolving. I do work on it a bit every xmas when I get bored enough, so who knows - maybe one day it will be finished enough to be usable for something. :)
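     For readers unfamiliar with that prediction scheme, a simplified sketch of the roll-forward/roll-back pattern. Only P_PredictPlayer and P_UnpredictPlayer are real (G)ZDoom function names; the surrounding declarations are hypothetical stand-ins for engine context:

     ```cpp
     struct player_t;

     void P_PredictPlayer(player_t *player);  // simulates the tics the client is ahead
     void P_UnpredictPlayer();                // rolls that prediction back
     void RenderFrame();                      // hypothetical renderer entry point

     // One client frame: predict forward so local movement feels instant, draw,
     // then undo the prediction so the next real playsim tic starts from the last
     // authoritative server state (the server-side PlayerPawn).
     void ClientFrame(player_t *consoleplayer)
     {
         P_PredictPlayer(consoleplayer);
         RenderFrame();
         P_UnpredictPlayer();
     }
     ```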
  9. Coming from the guy that posted a mocking picture of Graf. Right.
  10. Sorry Jimmy, I'll play your map any way I want to - just like some people here use my software in ways I don't like. Quite fascinating, tbh, how obsessed you guys seem to be with how others play your stuff. In any case, I don't think we will gain more from this. This really will be my last post in this thread.
  11. As I said, we reached the level of "git gud". Your definition of good gameplay clearly does not match mine. Let's just keep it at that. I like games where I can play them without constantly loading.
  12. Read between the lines. As Graf already hinted, I only cheat when the map designers failed to give me a proper difficulty level. But hey, I guess I just need to get goood! That I can clear the original Doom game on UV with generally 150+ health isn't a sign something else might be wrong, no no.
  13. Okay I'm out of this discussion - we apparently reached the "git gud" level of argumentation now.
  14. Loading a savegame ruins the flow I enjoy from playing shooters, which is why I don't use that non-cheating way. I normally prefer playing things at a difficulty level where I do not die. For the vanilla maps that means I can play them all on HMP, and the maps I know on UV, without ever getting close to dying. For community maps the difficulty levels are all over the map.
  15. Hehe, that's a matter of perspective. The save scumming I see on most streams - where they continue from a 3% health save and just wait for the dice to roll their way until they reach the next health pack - could be argued to be avoiding the game just as much. ;)