Everything posted by GooberMan

  1. Is Hexen (1995) a so complicated game?

    The really annoying thing about Hexen's switch hunts is that the developers already had an in-game solution to make the results much clearer: items. But instead they went for the more technically challenging (at the time) deferred script execution method, and figured showing text rather than providing clear design was good enough. If someone were to make a map patch WAD that changes the switches to items/puzzle pieces/etc., it would do a lot to clear up some of the game's more annoying design habits. Some people might even say "Metroidvania" over and over in the release thread too.
  2. Besides. It's clear that it will work in Eternity. But I'm definitely aiming for the simpler goal first.
  3. I've been working on a little something in my spare time over the last few weeks.

    Many years ago, when I was doing 3D floor layouts in Prime Directive, it really annoyed me that the tools were so obtuse. Placing 3D floors required essentially a hacker's knowledge of exactly how the feature works: draw a 2D sector, draw a control sector, set up the lines. Doing anything reasonably complicated with them required thinking in full 3D in a 2D space. The community has long used a baked binary format - the WAD - as both its source and target data, and it annoyed me that GZDoom Builder didn't support a custom source format, allow arbitrary placement of 3D floors, and bake that data down to a WAD during the save operation.

    Of course, adapting GZDoom Builder or even writing a new editor is a big task. There's a bit of a different solution out there. Enter BSP2Doom.

    Quake has existed for a while now. There are full 3D editors and everything. And Doom engines have supported 3D floors for a while now. Translating Quake maps to Doom isn't the impossible task it used to be.

    Honestly, I know the community loves building these big sprawling vista maps at the moment, but it really felt like overkill looking at last year's Cacoward winners and seeing that every one had a vista screenshot. So let's see what kind of gameplay we can get out of Doom if we start treating verticality as an actual thing.

    Besides. I spend my work day knee-deep in Unreal Engine doing everything from bug fixes and optimisations to finishing half-implemented features the UE team left behind. This is relaxing and far less insane in comparison.

    The workflow goes something like the following:

    - Use another tool in this package - Doom2QWad - to create a texture WAD that Quake editors can read (this will also spit out configurations for Quake editors in the future)
    - Load up an editor like Trenchbroom
    - Create a Quake level using Doom assets
    - Compile your map.
EricW's tools are the gold standard in the Quake community at the moment. Do the following:

- Run qbsp. You don't get geometry otherwise.
- Run light. Your map will be fullbright otherwise.
- Don't build vis data. This is normally where you'd do such a thing for a Quake map, but it's 100% unnecessary here.
- Run BSP2Doom. This will spit out Doom-compatible data from your newly compiled Quake map.
- Run your normal node builder on the new map WAD.
- Run your map.

The resulting map is not meant to be edited in a Doom editor. Depending on the options given to the BSP2Doom utility, the map could look anywhere between normal and nightmarish when loaded in a Doom editor. The WAD here is exactly a cooked data format. Only ever edit the source format.

The code isn't ready for public release. It does already live on GitHub in a private repository though, so all I'll need to do is flick a switch when it's ready. Regardless, I plan on releasing it with a mapset rather than just dumping it on the community. I have a couple of maps in mind, but I'll also be on the look-out for experienced mappers who want to explore what Doom can do when properly exploiting a 3D playspace. No slaughter, no everything-is-a-trap, no epic vistas. Drop the cliches and let's see what else can be squeezed out of Doom.

Also of note: I'm resurrecting Calamity with this tool. The old code I wrote will basically be ignored. Much of the code I'm writing now will make it into Calamity with an optimisation pass or ten. It's all being written ground-up in D, which is resulting in some really clean code (the UDMF parser is ultra clean; the BSP parser reasonably clean; the WAD parser less so because it's an insane format wholly reliant on the original implementation).

Progress:
- Quake BSPs - Geometry shell done. 3D floors about 80% done. Doors and platforms TBD. Lightmaps working but constantly being tweaked. Coloured lighting (ie .lit files) supported. Quake overbrights supported.
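The compile-and-convert steps above lend themselves to scripting. Here's a minimal sketch in Python (the tool itself is written in D); only `qbsp` and `light` are real tool names from EricW's toolchain - the `bsp2doom` invocation and the `zdbsp` node-builder step are assumptions for illustration:

```python
# Sketch of the build pipeline described above. Only "qbsp" and "light"
# are real tools from EricW's toolchain; the bsp2doom command line and
# the choice of zdbsp as node builder are assumptions.
def pipeline_commands(map_src: str, out_wad: str) -> list[list[str]]:
    bsp = map_src.rsplit(".", 1)[0] + ".bsp"
    return [
        ["qbsp", map_src],                  # build geometry
        ["light", bsp],                     # bake lightmaps; note: no vis step at all
        ["bsp2doom", bsp, "-o", out_wad],   # hypothetical converter CLI
        ["zdbsp", out_wad, "-o", out_wad],  # run your usual node builder
    ]

# Each command list can then be fed to subprocess.run(cmd, check=True).
```

Returning the command lists rather than running them directly keeps the sketch testable and makes it trivial to slot into any build system.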
- GoldSrc (Half-Life, Counter-Strike) BSPs - Preliminary. Data structures almost identical to Quake BSPs, just need to build some test maps.
- Quake 3 BSPs - Preliminary. Data structures defined, but need to handle bezier patches before I can attempt to support it.

A big question I have is how to support doors and platforms. I haven't decided if I want to bake them into the map geometry, or use all the polyobject tricks I whipped up a few years back. I will likely try both approaches.

Engines supported:
- GZDoom - Working off 3.6.0 as a base, probably works fine on earlier builds
- Eternity - The UDMF namespace is fully supported at the least, but since the GZDoom implementation heavily relies on shaders this will probably need the software fallbacks I'm coding
- k8vavoom - Same deal with the shaders. A way to hook into its own lightmap system with my pre-baked lightmaps will be fab and mostly eliminate the need for shader work.

Output formats:
- Map: UDMF
- Textures: PNG, Doom lumps

Packaging:
- Doom WAD
- ZDoom folder structure
- Run 7zip or equivalent in your build pipeline if you need a PK3; this is designed to be a scriptable tool and not an all-in-one solution

The theory behind translating geometry
Things? Entities?
Quake style lighting is quite different - how do you handle it?
Lighting system implementation details
Screenshots
  4. Sweet. Although, to be honest, I'm not sure if that will make supporting Eternity easier or harder. I'll have a think on it.
  5. Does that affect collision as well, or is it just a visual effect?
  6. My preference certainly goes in the opposite direction of the community's, then. I had to turn bounce lighting off for those most recent screenshots to illustrate the request. The scene looks entirely different with bouncing on. And then there are those massive blocks when you do get a hard edge. I'd rather have higher fidelity lightmaps - but the maps I want to make also make no attempt to preserve a Quake aesthetic, or even a Doom one.
  7. The developers have gone on record multiple times saying the SDK will support Quake maps. If you have a link confirming their internal process is different, I'd like to read it.
  8. That's the first time I've tried using such small bits of geometry, and I immediately found issues. I can give you a couple more screenshots of some limping-along version though, to give you an idea. You can really see the low fidelity of Quake lightmaps here - each texel of the lightmap is equivalent to 16x16 texels of the texture it operates on. Quake 3 lightmaps can work better, but they also come with their own set of quirks. There are also ways to cheat Quake lightmaps that I only realised after delving deep into their implementation details. Quake lightmaps are generated entirely in texture space. The physical surface size matters not. So if you were to double the texel density of a surface - say, by scaling the texture - you also get higher resolution lightmaps. I might branch EricW's tools and work out a way to let you scale lightmap generation for BSP2Doom's purposes.
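The 16:1 ratio above makes the scaling trick easy to see with a little arithmetic. A simplified sketch (the function name and the +1 border luxel are my own simplification - real qbsp derives lightmap extents from texture coordinates, not raw surface sizes):

```python
# One lightmap texel ("luxel") covers a 16x16 block of texture texels in
# Quake. Scaling a texture up increases texel density across the surface,
# so the same physical surface gets proportionally more luxels.
LUXEL_SIZE = 16

def luxel_grid(texels_w: int, texels_h: int, texture_scale: float = 1.0) -> tuple[int, int]:
    w = int(texels_w * texture_scale) // LUXEL_SIZE + 1
    h = int(texels_h * texture_scale) // LUXEL_SIZE + 1
    return w, h
```

So doubling the texture scale on a surface roughly doubles the lightmap resolution in each axis, which is exactly the cheat described above.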
  9. Granted. I can certainly go into plenty of detail here as to exactly where that complexity is, and why the fundamentals of item placement and object manipulation can be comparable between Quake and modern engine editors (and why it only takes a man-month to get a good quality implementation competitive with Unity and Unreal). But why do more people try to create tools for Doom instead of Quake? Doom is far more obtuse from a data perspective than Quake, and Doom editing is only in the reasonably good state it's in right now because people looked at the tools and the pipeline and thought they could do better. What if we were still using XWE instead of SLADE? What if Doom Builder was never written?

    There are certainly still improvements that can be made to the Quake pipeline. Custom monsters are a big one - ZDoom has plenty of "drop in" custom monsters. They still need you to manually construct a DECORATE with all the combined entries, but it's still a reasonably smooth process. There's no particular reason a progs.dat manager that handles new monster definitions and generates appropriate support code can't be done, other than that no one wants to do it. So people still have to invoke QC.

    Trenchbroom also really needs to ditch those CAD views and provide multiple 3D viewports. A CAD view really does nothing to help a normal person look at a Quake editor and think "I can use this". Real-time lighting preview is certainly quite achievable as well, either by using modern rendering techniques or going old-school and using surface caching similarly to how the original Quake software renderer operated.

    But here I am writing a tool to improve the Doom editing experience, with a tangential goal of improving the Quake experience by getting more people interested in Quake editing. Interesting aside: Dusk's levels are Quake[1] BSPs converted to Unity. Perhaps we're on the threshold of a Quake resurgence.
Quake 2, though, is in an even worse place (and with only one source port worth talking about). Quake 3, well, Googling for various terms only shows me quake3world as a hub and that seems fairly barebones. [1] Well, honestly, I say Quake but I figure GoldSrc would be the better option since each embedded texture contains its own palette.
  10. Purely to keep my sanity, I'm focusing on getting it working in GZDoom first without portal tomfoolery. That certainly sounds like a great suggestion though for optimisation purposes. It almost starts resembling Doom 3 editing in that respect, placing down sectors for the portal culler. To that end: This will be an open source project. I'd be open to the right pull request from the right programmer. Programming in D really isn't that hard, at least the part that will need modifying. The UDMF parser makes heavy use of compile-time introspection and code generation, but the code that actually builds Doom compatible geometry should be quite readable to anyone with a basic understanding of C#. I'm not sure I'll have the patience or interest myself to do all the portal tomfoolery, but we'll see how things go when it's getting closer to release time.
  11. I know a number of people on these forums would agree with your suggestion, but I find it to have a degree of false equivalence. There are several orders of magnitude more people making content with modern engines and modern editors than the Doom and Quake communities combined. There are plenty of theories one could put forward as to why people are more drawn towards Doom rather than Quake editing. Certainly from a pure map-creation perspective, there's nowhere near the variety in Quake's bestiary that there is in Doom's. I'm tempted to make a parody Quake map called "I'm a piece of shit that thinks placing Spawns, Vores and Shamblers everywhere is a good challenge", but the straight-up truth of the matter is that your only other alternatives are four types of melee enemy (two with alternate ranged attacks), the Scrag, zombies, and a couple of grunts. E4 is fantastic because it acknowledges that and goes for environmental tomfoolery. Doom mapping is often called simple because it's just like drawing a map on a piece of paper. Quake mapping, on the other hand, is a lot like building with Lego.

    This reminds me, too. I saw a commit go into the Trenchbroom GitHub that finally fixes subtractive geometry operations to operate on the world, instead of needing to select everything else and then select the single brush you want to subtract last. And a new version was released a few days ago. Time to download that.

    EDIT: Yep. That significantly improves the editing experience.
  12. The side-panel list of entity properties is certainly the better way to go than the DEU-style properties boxes that Doom Builder uses. Trenchbroom's panel however just feels so manual-labor compared to modern engine editors. It can definitely be improved, especially with some docking support so that I can have the panel floating wherever I want. But yes, everything else I basically agree with. Quake editing pipelines more closely resemble modern engines.
  13. That also sounds like a good stress test for Eternity's portals, since a thing would overlap several portals fairly often. Still. Supporting Eternity is well on the backburner for now. Especially if that's true about its slopes.

    EDIT: Another implementation detail. I'm setting floor and ceiling planes directly instead of using slope line actions. Having to make a ton of control sectors to make basic slopes doesn't sound like my cup of tea - even something simple would need a control sector.
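Setting planes directly means computing the plane equation from the map geometry itself. A sketch of how that might be derived from three points on a sloped surface, assuming GZDoom's UDMF `floorplane_a`/`b`/`c`/`d` sector fields (which describe the plane a*x + b*y + c*z + d = 0):

```python
# Compute a plane equation (a, b, c, d) from three non-collinear points,
# suitable for GZDoom's floorplane_a/b/c/d UDMF sector fields.
# The normal is the cross product of two edge vectors; d places the plane
# so that a*x + b*y + c*z + d == 0 holds for all three input points.
def plane_from_points(p0, p1, p2):
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = p0, p1, p2
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    a = uy * vz - uz * vy
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * x0 + b * y0 + c * z0)
    return a, b, c, d
```

No control sectors involved: three vertices of the Quake face are enough to emit the whole slope.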
  14. Actually, this reminds me. The Quake community's editors and documentation are in a far worse state than here in the Doom community. I'm hoping an influx of people interested in Quake mapping would help that out.

    I'm also not a complete fan of Trenchbroom. Its entity property editor is fairly barebones - anyone used to Doom Builder is in for a shock. And its insistence on not providing standard move/rotate/scale gizmos is frustrating. Doing fine-grained movement with camera angles it doesn't expect (ie anything approaching horizontal) often results in my brushes moving out to the distance. I also think that brush/entity selection would benefit greatly from a hierarchy view, not just from an organisational standpoint but to allow simple mass selection and things like group movement by selecting a parent node. I'm half tempted to write a Quake editor myself from here. That's probably one side project too far for me right now.

    That's the plan. Liquids are a little bit tricky in the BSP - no entity, capping surfaces only, name of the surface special-cased by qbsp and the engine. FWATER1 will be a solid wall by default. I need to double-check the Quake source code for implementation details to see if there's anything I'm missing, but at the very least this is going to require user input to define liquid surfaces, and my tool will generate Quake-compatible surface names from there.
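For context on that special-casing: Quake's qbsp and engine treat texture names beginning with `*` as liquids. A sketch of how a converter could emit Quake-compatible names from user-flagged Doom flats - the helper and its behaviour are hypothetical, not what Doom2QWad actually does:

```python
# Quake special-cases texture names that start with '*' as liquid
# surfaces (non-solid, warped). A converter could let the user flag Doom
# flats like FWATER1 as liquid and emit '*'-prefixed names for them.
# The length clamp reflects Quake's 16-byte, NUL-terminated miptex name.
def quake_texture_name(doom_name: str, is_liquid: bool) -> str:
    name = ("*" + doom_name) if is_liquid else doom_name
    return name[:15]
```

So a user-supplied liquid list is enough: FWATER1 stays a solid wall unless flagged, in which case it goes into the BSP as `*FWATER1`.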
  15. I've never looked that deep into Eternity, but that certainly kills the initiative I have to support it. It's not an impossible task. But it sounds like it means making a ton of horizontal map slices. Another implementation detail I neglected to mention is that I do all the heavy lifting in an intermediate format (hence why Quake 3 BSPs are on the cards), so sorting into horizontal slices and running the code I already have on the results would be the obvious way to do it. Sounds fairly tedious to get up and running though.
  16. If you can build it in Quake, I can translate it. With some exceptions. One map I would like to make is something of a demolished building/ship-crash kind of map, which means doors would naturally want to be sloped. I can't translate dynamic objects to Doom like that. Certainly the portcullis style of door would demand that doors be implemented exclusively with 3D floors instead of polyobjects, but that also immediately rules out many Quake-style doors. Moving platforms will work if I embed those ACS scripts I wrote, but there are also some changes I've long wanted to get into polyobject collision resolution that would stop the desync bugs that are currently possible.
  17. I mean, I'd really like to not use that spotlight hack. The thing that'd make or break your suggestion is how Cacodemons floating across 3D floors will be affected by doing solely sector lighting. I've not seen a Quake poly inhabit anything larger than a 192x192 space, which sure is smaller than many DOOM.WAD/DOOM2.WAD sectors but also isn't really fine grained enough for subtle lighting. So I'd want to split in to further subsectors for an all-sector approach.
  18. Well, it certainly sounds like you have the right information and questions to turn the brightmap pipeline into a backward-compatible, general-purpose lightmap pipeline.

    Yes, and given that the branch in question is on a uniform, I would assume a sane, rational shader driver would compile two versions and only execute one based on the value of the uniform. But Graf has consistently complained about the insane and irrational drivers and chipsets people run GZDoom on over the years. I wouldn't be too quick to make this assumption without doing some of the things I'll mention.

    Exactly how many shader permutations are required when you download GZDoom fresh and load Eviternity into it? One for the menu and titlepic to get you in. For a good experience, precompile one for standard Doom lighting at the same time. I haven't looked at GZDoom's code there, but I can only assume from your concern that GZDoom precompiles shaders during the initialisation phase. Which is wholly unnecessary and wasteful. GLSL is very definitely a bad specification, so you have to work around its stupidity. The only time you know you need to factor dynamic lights into the equation is if they exist in GLDEFS and you're loading a map with matching actors; they're placed in maps manually; or ACS/DECORATE spawns them. A bit of static analysis will get most cases. Compile as you need them. Store the results; they'll be needed again next map.

    Dynamic lights can also be factored away to a degree. Unrolling loops is just as valid an optimisation strategy on GPUs as it is on CPUs. More lights than you've unrolled? Back in the fixed-function day, multiple lights were done in multiple render passes on a surface if you couldn't squeeze them into whatever the hardware provided. Deferred shading? Each light is rendered exactly once. GZDoom? This is where you need to start getting creative.
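The compile-as-you-need-them, store-the-results idea boils down to a permutation cache keyed by feature flags. A minimal sketch - the feature names and the `compile_fn` hook are invented for illustration, not GZDoom's API:

```python
# Lazy, cached shader permutation compilation, as argued above: compile a
# permutation the first time its feature set is requested, then reuse it.
# compile_fn stands in for the actual driver compile call.
class ShaderCache:
    def __init__(self, compile_fn):
        self._compile = compile_fn
        self._cache = {}
        self.compiles = 0  # how many real compiles have happened

    def get(self, **features):
        # e.g. get(brightmap=True, dynlights=2); order-independent key
        key = tuple(sorted(features.items()))
        if key not in self._cache:
            defines = "\n".join(f"#define {n.upper()} {int(v)}" for n, v in key)
            self._cache[key] = self._compile(defines)
            self.compiles += 1
        return self._cache[key]
```

With static analysis of GLDEFS/map actors deciding which feature sets can ever occur, most maps would touch only a handful of cache entries, and nothing is compiled up front.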
If you find a map that needs those 50 shader permutations, then you've found the exception to the rule across the lexicon of ZDoom-compatible Doom WADs and mods. And if it's running on Intel, this is exactly the point where Graf would traditionally screech at his users to get a $30 discrete GPU. But why provide just a single render path for all options? Surely when you're making decisions on what shaders to compile, you can also analyse hardware information and assign the hardware a class that gives it a better-optimised render path - including how to compile shaders. At that point you'd start having code that approaches commercial engine quality.

This is a disingenuous statement. You know I can read shaders. You also know that by the time it gets to sampling a brightmap, there have already been a few if statements. And there are many more branches after, depending on what material GZDoom has decided to assign to a surface. Looking at code and deciding it's wrong is part of my day job. Don't misrepresent things just because you want to prove a point.

EDIT: Worth pointing out is that this topic started by asking if static surface lighting is feasible in the Doom engine. The answer is emphatically yes. This entire back-and-forth borderline derails the topic.
  19. Front to back for solid geometry. Sprites and midtextures should be drawn back to front to account for blend operations correctly. This ZDoom video certainly implies this assumption is accurate in this case (begins playing at 8:33). There's also no pre-Z pass, I assume, but given how Doom renderers generally handle geometry this is probably a bad idea anyway.
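That ordering rule can be stated compactly: opaque surfaces near-to-far (maximising early-Z rejection), translucent surfaces far-to-near (so blending composites correctly). A sketch with assumed `depth`/`translucent` fields on each surface record:

```python
# Draw-order rule described above: solid geometry front to back, then
# translucent surfaces (sprites, midtextures) back to front. "depth" is
# assumed to be a precomputed view-space distance.
def draw_order(surfaces):
    solid = sorted((s for s in surfaces if not s["translucent"]),
                   key=lambda s: s["depth"])        # nearest first
    translucent = sorted((s for s in surfaces if s["translucent"]),
                         key=lambda s: -s["depth"]) # farthest first
    return solid + translucent
```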
  20. Sampling sector ambient + brightmap texel in lightmap generation would deal with that. Let's clarify for those playing at home: brightmaps as defined in ZDoom's material system are additive lightmaps, not blending lightmaps. But they're only additive on sector lighting. Any other kind of lighting operates after brightmaps have been evaluated. Eesh. Just one line with a branch. Modern GPUs are a bit better with branches than the simple predication of old. The modern "threaded" approach that NVidia GPUs take still results in pausing execution units and switching new ones in, though, which really isn't desirable. The BRIGHTMAP define is certainly the better way to go about it. No branch, because the CPU has already done that check once for you; now you already know there's a brightmap, and the correct code has been uploaded to be executed thousands of times.
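Put as a per-texel formula, the evaluation order described above might look like the following. This is a sketch of the described semantics, not GZDoom's actual shader code, and all the names are mine:

```python
# Brightmaps add to sector light *before* the texture is modulated; any
# other lighting (dynamic lights etc.) applies after that evaluation.
# All values normalised to [0, 1].
def shade_texel(texel: float, sector_light: float, brightmap: float,
                other_light: float = 0.0) -> float:
    base = min(1.0, sector_light + brightmap)    # additive on sector light only
    return min(1.0, texel * base + other_light)  # other lighting afterwards
```

Which also shows why lightmap generation has to sample the brightmap texel alongside sector ambient: both feed into the same `base` term.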
  21. Alright, that edit I did above happened far too quickly. Lightmaps are baked data. If you change a texture on your map, that data is stale. If that texture is a brightmap, then you need to sample the brightmap texels in your lightmap generation.

    There's of course nothing stopping a user setting sector light. Sector light is exactly ambient light in this case, ie a minimum light level. If a user decides that their minimum light level of 192 is fine, then that's correct. If you're really worried, then just like the brightmap above you can sample that in your lightmap rendering.

    You keep using the word "inefficient". If this were a pull request where I was fixing brightmaps to how they should have been implemented in the first place, your U-shaped sector would be an irrelevant argument, since I'd be rendering lightmaps for convex subsectors. And I'd be doing something that it looks like GZDoom still isn't doing - creating texture atlases to reduce draw calls. And ditching those atlases entirely for a DX12/Vulkan renderer, since they're unnecessary there. And I'd remove all those if branches from the shader. It's like the renderer wants to be inefficient by default.

    But hey, if you think adding yet another pipeline to GZDoom to handle something that's basically already there is the way to go, who am I to argue? It's not like I have relevant experience in this department. Not me. I'm just some guy on a forum.
  22. Rubbish. Associating a lightmap value explicitly with a texel versus a surface coordinate does not discount them from being used as brightmaps. Try it. Go on. I'll wait. EDIT: Don't forget to sample the brightmap correctly during lightmap generation.
  23. Sector light zero, eliminating Doom lighting entirely from the equation. Remove that unnecessary desaturate from every brightmap lookup. Oh look, now they're exactly lightmaps. Fact. EDIT: The only thing stopping them being lightmaps under your definition is that Graf neglected to give a second set of texture coordinates back in the day. A second set that maps to the brightmap texture exactly will both match the specification of brightmaps and allow unique surface lightmaps.
  24. Pedantics. Here we go. You can bring up that word all you want and write out implementation details until the cows come home, but they're still lightmaps. That's fact. But you're right about one thing. Texture coordinates. If you only have one set, either you split your surfaces up by texture limits so that lightmaps are still unique and texture data doesn't require repeating in memory, or you repeat that texture data. It's not the lightmaps that will be inefficient, since they're the part that actually needs to be unique. Like I said: they were implemented in a sloppy way for a specific purpose. It's far from the only thing in GZDoom that is guilty of that.
  25. By making some assumptions. GZDoom is very destructive with its data. Getting anything in to the shader is a difficult task. So I've hijacked a little-used render path and modified it to suit my purposes. I'll make a thread soon about what I'm doing. It's not quite ready to be shown to the public. But y'all are gonna love it.