GooberMan

Members
  • Content count: 1560
About GooberMan

  • Rank: Scripting Nut


  1. GooberMan

    What's your favorite source port?

    I've done way more than that. I wrote a C header parser specifically using wxWidgets as a testbed so that I could automate bindings to other languages. (Protip: Never write your own C header parser, even if the reasons seem sound). But hey, I'm sure support is terrible on TempleOS or whatever it is you use, so you have a point there. Probably.
  2. GooberMan

    What's your favorite source port?

    Number one: there are cross-platform, GPL-compatible UI libraries that have been around for 30 years, like https://en.wikipedia.org/wiki/WxWidgets. And number two: expect to see a frontend in Rum and Raisin Doom in the future using ImGui, another cross-platform UI framework that proves your complaint wrong once again. These threads would be so much shorter if you stopped continually trying to gatekeep everything.
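    A minimal sketch of what an immediate-mode ImGui panel looks like, assuming the context and platform/renderer backends are already initialised elsewhere; the settings struct and function names here are illustrative, not Rum and Raisin Doom's actual code:

        #include "imgui.h"

        // Hypothetical settings struct, purely for illustration.
        struct RenderSettings
        {
            bool  vsync           = true;
            int   renderThreads   = 4;
            float loadBalanceBias = 1.0f;
        };

        // Called once per frame, between ImGui::NewFrame() and ImGui::Render().
        void DrawRendererPanel( RenderSettings& settings )
        {
            ImGui::Begin( "Renderer" );                  // open (or focus) a window named "Renderer"
            ImGui::Checkbox( "VSync", &settings.vsync ); // immediate mode: widgets read/write the value directly
            ImGui::SliderInt( "Render threads", &settings.renderThreads, 1, 16 );
            ImGui::SliderFloat( "Load balance bias", &settings.loadBalanceBias, 0.5f, 2.0f );
            ImGui::End();
        }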
  3. I won't be attempting this until I've simplified the renderer some more. As mentioned above, sprite clipping is next in my sights. The flat rasteriser I wrote will also be the only rendering routine when it moves over to the GPU; no more wall rendering code. That will also mean simplifying how the wall rendering is set up, since most of the time right now is spent outside the lowest rendering loop just setting up things like wall offsets. But needless to say: I don't think the original renderer would be worth porting over to compute code. I know the kind of code that gets all those effects you see in Returnal, and the simpler your code, the better it is on the GPU. And, as demonstrated here, the better it is on the CPU too. tl;dr: I've still got work to do.
  4. GooberMan

    >60 HZ broken in the Unity Port

    If you're going to be talking about refresh rate issues, you need to be talking about what hardware you're running; doing so without mentioning it is borderline pointless. For example, I just got two different experiences on my Intel® HD Graphics 530 and my GeForce 960M running on the same i7-6700HQ system. There are some weird timings going on, though. Best results have been on the 960M: I've been getting it to run up to 120Hz on my 120Hz panel with vsync on, but it does require setting the frame limiter to off (ie 0). You can turn vsync off, but then you're at the mercy of whatever frame was last rendered, which can lead to jerky results. With vsync on, though, it slowly drifts to timings where it gets locked to 60Hz. This may not be the easiest thing to fix, given that Unity Doom doesn't exactly have access to Unity's internal frame sync code to make it behave a bit better in borderless fullscreen or plain old windowed modes. In every case, though, toggling between fullscreen and windowed has reset the behavior, which suggests to me that the Windows scheduler is at least partly to blame. Otherwise, needless to say: the gameplay itself is capped at 35Hz, just like with every other source port.
  5. It initialises GL 3 or higher, yes. It's more for planned features than anything it requires right now. Buuuuuut having said that, I have stated that this is about getting the renderer to run efficiently on modern systems. So yeah, Core 2 Duo, that predates the first i7. What I've done so far should theoretically work just fine on that line of processors, but I'd definitely want to look at the threading performance with how I've got things set up thanks to the way-less-sophisticated cache. And it's an Intel integrated GPU there so it would certainly not have the capability for what I have in mind.
  6. GooberMan

    TNT 2: Devilution (Second beta released)

    Just poking my head in to say that MAP30 does things to my port. Short story: I treat every texture and flat as a composite and cache them into memory. Cool. But I also convert each texture to every light level before the level starts (saving a COLORMAP lookup at render time), which means I use 32 times more memory than a normal software renderer. End of the day: I chew up 4 gigs of memory. I had to do work on the zone allocator in fact, which was still using 32-bit signed integers for memory tracking instead of size_t. But I got it running at least.

    Doing the math, that means a normal software renderer would need 128 megabytes of memory to keep everything loaded all at once. And after a bit of googling, it turns out that there were 486s back in the day that had 128 megs. So if you were super rich/freeloading off a work machine/etc. you could have played this map back in the day without incurring constant cycling of textures from memory.

    I also found an instance of the midtex bleed bug that eluded me for a while before I fixed it; seeing it show up in Chocolate Doom and not in Rum & Raisin made me unreasonably happy.

    Anyway, this mapset is looking like a beast from what I've seen. Looking forward to the full release. I probably won't have much time to do a proper playthrough and report bugs at the moment; as you might gather from this post, my main interest is in making sure the thing actually runs.
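    A rough sketch of the "convert each texture to every light level up front" idea described above, assuming Doom's 32 diminishing-light COLORMAPs (NUMCOLORMAPS) and 8-bit composited textures; the types and names are illustrative rather than the port's real code. Baking all 32 remaps is what multiplies memory use by 32, and it's also why the allocator tracking needed size_t rather than 32-bit integers:

        #include <cstdint>
        #include <vector>

        // Bake one remapped copy of a composited texture per COLORMAP light level,
        // trading roughly 32x the memory for skipping the lookup at render time.
        // NUMCOLORMAPS matches Doom's diminishing-light maps; the rest is illustrative.
        constexpr int NUMCOLORMAPS = 32;

        struct LitTexture
        {
            int width  = 0;
            int height = 0;
            std::vector<uint8_t> pixels[ NUMCOLORMAPS ]; // one full-size copy per light level
        };

        LitTexture BakeLightLevels( const uint8_t* composite, int width, int height,
                                    const uint8_t* colormaps /* NUMCOLORMAPS * 256 bytes */ )
        {
            LitTexture out;
            out.width  = width;
            out.height = height;

            // size_t, not a 32-bit signed int: this is exactly where the totals add up fast.
            const size_t count = static_cast<size_t>( width ) * height;

            for( int light = 0; light < NUMCOLORMAPS; ++light )
            {
                const uint8_t* map = colormaps + light * 256;
                out.pixels[ light ].resize( count );
                for( size_t p = 0; p < count; ++p )
                {
                    out.pixels[ light ][ p ] = map[ composite[ p ] ]; // COLORMAP lookup done once, up front
                }
            }
            return out;
        }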
  7. This is intentionally broken until I finish player interpolation. Here's how it currently works: for all mobjs, the previous and current positions are interpolated according to where the display frame sits between simulation frames. This technically means you are guaranteed to see past data, except for one out of every <refresh rate> frames where a tic lines up exactly with a second. The one exception to this rule is the player. The angle is not interpolated at all; instead, the most recent angle is used, and for each display frame we peek ahead into the input command queue and add any mouse rotation found. This was the quick "it works" method that let me get everything up and running.

    Here's how it should work: exactly as above, except do for the display player's movement exactly what we do for mouse look. The full solution will require properly decoupling the player view from the simulation. At that point, it will be the decoupler's responsibility to create the correct position and rotation for the renderer instead of the renderer doing the job. There's some more setup I need to do for that to work correctly, but it will also cover all keyboard inputs once working.

    It may not seem like much to anyone, but I really got sensitive to input lag when implementing 144Hz support in Quantum Break. I tested a few other ports to see how they feel too, in fact. prBoom has the worst feel by far to me, with every other port I tried feeling about the same. My intention is to ensure there's basically zero effective lag between when input is read and when it is displayed (ie read on the same frame you render), and thus remove input error considerations from skilled play. There's also one thing I've realised after playing Doom at 35Hz for so long: it cannot be overstated how much of an effect frame interpolation has had on raising the Doom skill ceiling.
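    A sketch of the interpolation scheme described above, written in illustrative floating-point form (the port itself would work in fixed point); none of these names are the actual code:

        // 'lerpfrac' is how far the display frame sits between the previous and
        // current simulation tics, in the 0..1 range.
        struct Vec3
        {
            double x, y, z;
        };

        inline double Lerp( double from, double to, double lerpfrac )
        {
            return from + ( to - from ) * lerpfrac;
        }

        // Every mobj: blend previous and current simulation positions.
        Vec3 InterpolateMobj( const Vec3& prev, const Vec3& curr, double lerpfrac )
        {
            return { Lerp( prev.x, curr.x, lerpfrac ),
                     Lerp( prev.y, curr.y, lerpfrac ),
                     Lerp( prev.z, curr.z, lerpfrac ) };
        }

        // The player is the exception: take the most recent simulated angle and add
        // whatever mouse rotation is already waiting in the not-yet-run input queue,
        // so the view turns on the same display frame the input was read.
        double PlayerViewAngle( double latestSimAngle, double pendingMouseTurn )
        {
            return latestSimAngle + pendingMouseTurn;
        }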
  8. GooberMan

    Scientists prove that AAA gaming sucks.

    This game will kick your ass, ya filthy casul. The entire industry is expecting Elden Ring to take the statue next year.
  9. GooberMan

    Scientists prove that AAA gaming sucks.

    Having worked on a BAFTA-award-winning AAA game with no microtransactions and a reputation for requiring a high degree of skill, I feel I should point out that the AAA game likely to win the same award next year can be described in the same manner, and that it has many players saying how rewarding it is to progress after grinding out better weapons and stat points to get past their progression blocker. You basically can't release a mobile game without the practices highlighted here, unless you expect to make $bugger-all from your game. These practices continue to make money. The gnashing of teeth that they suck is a sentiment I share, and it's entirely expected from a community based around a nearly-29-year-old game where satisfaction for most comes from freely-available user-made content.
  10. Maybe it's time for a Planisphere update too, because I've been seeing threads drop below 9ms lately. But maybe more impressively: The Given. At 2560x1200, the original screenshots showed >40ms per render thread back on July 6. That's basically playable in software now, even more so if you drop it to a Crispy-style resolution. Still to do: fixing the load balancing code so it doesn't pile everything on the last thread when the thread count is > 4.

    But I'm chasing something else right now: vissprites and masked textures. I decided to open up Comatose yesterday (it runs, but seems to require some Boom line types, so you can't leave the first room without noclipping). It's something of a dog on software renderers, disproportionately so on sprite draws. Running -skill 0 shows very reasonable render times, so I wanted to know what was going on and threw some more profile markers in to see where the time went. I'm seeing two problems here: 1) sprite clipping is awful, it does a ton of work just to render nothing; 2) sprite clipping is awful, it does a ton of work, and when it does draw stuff the rendering routines aren't ideal but aren't really the performance bottleneck. So I'm currently grokking how sprite clipping works. I already have ideas on what I want to do to it, but I need to understand a few more bits of the code before I can dive in and do what I want with it.
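    For illustration, one way a per-thread column split could be rebalanced from the previous frame's timings; this is only a sketch of the general idea, not the port's actual balancer:

        #include <algorithm>
        #include <vector>

        // Sketch: shrink the column slice of any thread that ran long last frame.
        // 'lastFrameTimes' holds per-thread render times from the previous frame.
        std::vector<int> BalanceColumns( const std::vector<double>& lastFrameTimes, int screenWidth )
        {
            const size_t threads = lastFrameTimes.size();
            if( threads < 2 )
            {
                return std::vector<int>( threads, screenWidth );
            }

            double totalTime = 0.0;
            for( double t : lastFrameTimes )
            {
                totalTime += t;
            }

            // A slice that took a big share of the frame gets a small share of columns.
            std::vector<int> columns( threads, 0 );
            int assigned = 0;
            for( size_t i = 0; i < threads; ++i )
            {
                double share = ( totalTime > 0.0 )
                    ? ( 1.0 - lastFrameTimes[ i ] / totalTime ) / double( threads - 1 )
                    : 1.0 / double( threads );
                columns[ i ] = std::max( 1, int( share * screenWidth ) );
                assigned += columns[ i ];
            }

            // Hand any rounding slack to the first slice so it never piles onto the last thread.
            columns[ 0 ] += screenWidth - assigned;
            return columns;
        }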
  11. Been going through the column rendering routines to get speed back on UI/sprite/etc. elements. You know what that means: it's glitchcore time! Also, I guess this shows off frame interpolation and all that. Some issues with SDL being unable to detect the highest refresh rate a duplicated display is running at mean I can't get 120FPS footage just yet. But it'll come.
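    For reference, querying refresh rates through SDL2 looks roughly like this; it's this kind of enumeration that can come back with a capped rate when a display is duplicated. Illustrative usage only:

        #include <SDL.h>   // or <SDL2/SDL.h>, depending on how the include paths are set up

        // Ask SDL2 for the highest refresh rate any mode on a display reports.
        // Returns 0 if SDL couldn't tell us anything.
        int HighestRefreshRate( int displayIndex )
        {
            int best = 0;
            const int modeCount = SDL_GetNumDisplayModes( displayIndex );
            for( int i = 0; i < modeCount; ++i )
            {
                SDL_DisplayMode mode = {};
                if( SDL_GetDisplayMode( displayIndex, i, &mode ) == 0 && mode.refresh_rate > best )
                {
                    best = mode.refresh_rate;
                }
            }
            return best;
        }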
  12. Latest release: 0.2.1 https://github.com/GooberMan/rum-and-raisin-doom/releases/tag/rum-and-raisin-doom-0.2.1 Still the same deal as the last release: it's semi-supported. I want limit-removing maps that break this port so I can work out why and tighten it up. This release has some null pointer bug fixes, and fixes for some oddities I encountered when trying to -merge Alien Vendetta instead of -file. The big one y'all will be interested in, though: I decided it was well past time I implemented frame interpolation. Now it hits whatever your video card can handle. As it's borderless fullscreen on Windows, it'll be limited to your desktop refresh rate.
  13. And also the ARM used in the Raspberry Pi. But I think I'm going to do a deep dive on how to handle division anyway. You can turn on faster division at compile time on ARM, and there are also things like libdivide. I don't think it'll be a massive win at this point, but it'll still shave a bit of time off. My next focus on ARM, though, is just what in the heck is going on with thread time consistency. Only the final thread performs in a consistent manner; every other thread wildly fluctuates in execution time. Based on screenshots (taken with load balancing and with no load balancing), I can eliminate weirdness in the load balancing algorithm as the cause. Getting those threads to level out and not fluctuate should let the load balancer work better and bring the total frame time down.
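    A sketch of the libdivide idea: pay the setup cost once when a divisor is known, then reuse it across a hot loop so hardware division is replaced with a multiply/shift sequence. Illustrative usage, not the port's code:

        #include <cstdint>
        #include <vector>
        #include "libdivide.h"

        // When the same divisor is reused across a hot loop, precompute its magic
        // numbers once and let libdivide's operator/ expand to multiply/shift.
        // 'divisor' must be non-zero.
        void ScaleDown( std::vector<int32_t>& values, int32_t divisor )
        {
            const libdivide::divider<int32_t> fast( divisor ); // setup cost paid once
            for( int32_t& v : values )
            {
                v = v / fast; // no hardware division in the loop body
            }
        }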
  14. https://github.com/GooberMan/rum-and-raisin-doom/releases/tag/rum-and-raisin-doom-0.2.0 Release is out. Preliminary support is in for using flats and textures on any surface, which means Vanilla Sky renders as intended. But it's not perfect: the rendering routines are based on powers of two. And MSVC absolutely cannot deal with the template shenanigans I'm doing; it takes half an hour to compile that file now, topkek. Clang just does not give a fuck, even when compiling on my Raspberry Pi. Still got some work to do though: Vanilla Sky isn't exactly playable thanks to bad blockmap accesses. Still, this 0.2.0 release is the "break my port with some maps" release.
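    For illustration, this is roughly why power-of-two sizes matter to templated rendering routines: a power-of-two height turns the vertical wrap into a single mask, while anything else needs a real modulo. A sketch only, not the port's actual templates:

        #include <cstdint>

        // Power-of-two texture height: vertical wrap is a single AND.
        template< uint32_t Height >
        inline uint8_t SampleColumn( const uint8_t* column, uint32_t frac )
        {
            static_assert( Height != 0 && ( Height & ( Height - 1 ) ) == 0,
                           "this specialisation assumes a power-of-two height" );
            return column[ ( frac >> 16 ) & ( Height - 1 ) ]; // 16.16 fixed point
        }

        // Arbitrary heights (the imperfect case mentioned above) need a real modulo.
        inline uint8_t SampleColumnAnyHeight( const uint8_t* column, uint32_t frac, uint32_t height )
        {
            return column[ ( frac >> 16 ) % height ];
        }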