
Neural nets sort of generate Doom maps but not really

Recommended Posts

5 hours ago, InsanityBringer said:

I wonder if this is a sign of how they're getting this output. Take the line image and generate lines from each pixel of it

If so, they need to learn about vector graphics...

15 hours ago, Gez said:

If so, they need to learn about vector graphics...

I don't know how much exposure you've had to academics, but they don't necessarily write the best code 🙂.  Especially if they're working on a deadline for a paper...  I had a friend go to CS grad school and the software everyone wrote was basically a collection of hacks that met the bare minimum definition of working.  Textbook definition of "proof of concept." 


Building an AI that creates good, fun, attractive Doom levels should be possible, but there are some serious hurdles:

  • A Doom level is big, with lots of areas, so a good first step is being able to build good, fun, attractive rooms. Ok, so, what is a room, mathematically? A big area with one or more entrances or exits? That could describe just about any space. Yet it doesn't really describe an outdoor level. So maybe you have to define a few different models for what a room is, and how to join it to another room.
  • What is a good level? Maybe you could say that a good level is one where the player can navigate to most places, there is a significant challenge, and there are plenty of resources to get the job done. And the keys are accessible before the doors they unlock.
  • What is a fun level, mathematically? I have no idea how to answer that.
  • What is an attractive level, mathematically? Again, no idea. Programs like Oblige have themes - textures known to work together. But what about beautiful architecture? I think WadC might be helpful here.

For a neural net to work, it needs training. That's where the idGames archive comes in. But, it needs more than just a pile of levels. It also needs to know how to answer the above questions accurately. What part of each level is beautiful? Is this section fun? Is this a good level? To do that programmatically, you need to be able to score levels, and sub-sections of levels with accuracy, otherwise you're not training the AI in a meaningful way. And, honestly, I don't know how to program a function that outputs a value for how beautiful a structure is.

 

The AI needs to know that this area, with 4,000 linedefs and 1,800 sectors, is 86% beautiful, but that this other area with similar specs is an ugly mess, in a way most people would agree with. Now, if you could get a bunch of people to play a ton of levels and have them rate the fun and beauty of each and every area they encounter, you would have some data that could be useful for training an AI. But you need tons and tons of this data before you can expect meaningful results.

 

If you were able to train your AI with enough data, it could then estimate how beautiful and fun new areas will be. Next, you'd need to teach your AI how to draw levels, so that it had the tools to possibly build its own beautiful, fun level. It could then try thousands of variations until its "beauty detector" decided that one of its creations was, in fact, beautiful.

Edited by kb1

14 hours ago, david_a said:

I don't know how much exposure you've had to academics, but they don't necessarily write the best code 🙂.  Especially if they're working on a deadline for a paper...  I had a friend go to CS grad school and the software everyone wrote was basically a collection of hacks that met the bare minimum definition of working.  Textbook definition of "proof of concept." 

Sometimes it's appropriate to "just get it working." Sometimes you have to see it in action, to know how to build it right. But, yeah, you need experience to do any better. Their project was too ambitious to expect results in a term. I think it would take a couple years to get any basic decent results for such a massive complex goal.

14 hours ago, kb1 said:

*Long insightful post on the difficulties of programming an AI to build Doom maps*

The definition of Awesome, but Impractical.

Also, more contentious works or themes would create serious difficulties if they were to be included.

Should the level of detail be like KDIZD or The Ultimate Torment and Torture, or more like id's original levels? Or somewhere in between?

Should it be 2.5D or actual 3D? Depends entirely on the targeted port.

What would constitute a simple nonlinear map versus a sandbox map? That one's a little clearer, but still blurry sometimes.

Should specific focus be put on indoor or outdoor areas? This one really depends on the setting, but it can vary.

Where is a "boss battle" warranted in a given map? Where is it not? Again, varies.

If there are puzzles, should they be simple or difficult?

Same thing with mazes.

Too many of these things rely on human preference to get a concrete answer.

 

I'm not good for insight, but I am good for asking questions.

34 minutes ago, Aquila Chrysaetos said:

Should it be 2.5D or actual 3D? Depends entirely on the targeted port.

Omg, really? Vanilla is 3D, first off, and second, no less 3D than GZDoom. Also, why would you train an AI to work with GZDoom before it can work with vanilla? Even people shouldn't be doing that.

 

Also, boss battles, mazes? That AI just failed its task simply by being allowed to consider those design-trope failures. Like with people, though, an AI would need to learn to make regular Doom maps first before making non-Doom maps in Doom.


What you need are custom loss functions (a number that goes up the worse something is), which is perhaps a slightly easier way to think about it.  What makes a level fun and awesome?  I dunno.  What makes a level ugly, bland, or boring?   I could probably come up with some coarse heuristics for that.  Neural networks work by continuously tweaking values to try to minimize the loss.

 

I'm not saying this is easy but it somehow seems more tractable to define a "good" level as one with a minimum amount of bad qualities.
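As a rough Python sketch of that penalty idea, for a simple tile-grid level (the tile codes, thresholds, and penalty values are all invented for illustration, not from any real project):

```python
import numpy as np

WALL, FLOOR, DOOR = 0, 1, 2  # hypothetical tile codes

def penalty_loss(tile_map: np.ndarray) -> float:
    """Sum of penalties for detected flaws; 0 means nothing bad found
    (lower is better), which gives the optimizer a defined endpoint."""
    penalty = 0.0
    # Penalize maps that are mostly solid wall (bland/unplayable).
    floor_ratio = np.mean(tile_map == FLOOR)
    if floor_ratio < 0.2:
        penalty += (0.2 - floor_ratio) * 100.0
    # Penalize doors with no adjacent floor tile (unreachable).
    for y, x in np.argwhere(tile_map == DOOR):
        neighbors = tile_map[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if not np.any(neighbors == FLOOR):
            penalty += 10.0
    return penalty
```

An all-floor map scores 0.0 under these checks, while an all-wall map picks up the blandness penalty, which is exactly the "minimum amount of bad qualities" framing.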

 

We attempted to make a scoring system for how "fun" a Wolf 3D level is, but it is (unsurprisingly) not very good at predicting that. Some trash gets penalized, but seemingly just as much gets promoted based on some quirk of the scoring. Switching that around to purely penalizing bad design features may have gotten us better results. That might be fun to explore, since I have a few thousand Wolf 3D levels to play around with and a fair amount of them are absolute junk.


I read (skimmed) the paper, and was surprised (given @Linguica's comment) that they did actually generate maps. Or maybe I missed a nuance. Anyway I mailed them with some questions, also requesting the outputs.


As with any learning, you'd have to start from the ground up. Even if a designer came up with a megawad that combined all the best qualities of Requiem, Deus Vult, Swim with the Whales, The Sky May Be, Alien Vendetta, and Suspended in Dusk (no, I don't even know what such a megawad would look or play like), that designer likely did not just open an editor one day and start making levels with that level of skill and complexity the next day. There would be a learning curve.

 

So it would be with an AI learning to make Doom levels. The "advantage" for the AI is that you could feed it enormous data sets (i.e., the entire idgames archive) and let it chug away, so the "learning" might happen more quickly than it would for a person.

 

I suppose files could be grouped by rating, so the AI could be fed the 4 and 5 star wads (these are good, emulate properties here), the 3 star wads (these are average, they can be emulated), the 2 star wads (try not to emulate these), and the 1 star wads (do not do the things you find in here). From that, the AI could begin generating levels. There would need to be some feedback mechanism, so that the AI could further refine its level-making rulesets. Eventually, you'd probably end up with a decent megawad.
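One naive way to sketch that bucketing, assuming a training setup whose loss accepts per-example weights (the numbers here are pure invention):

```python
# Hypothetical per-wad training weights keyed by idgames star rating:
# positive = "emulate this", negative = "treat as a counter-example".
RATING_WEIGHT = {5: 1.0, 4: 0.8, 3: 0.3, 2: -0.3, 1: -1.0}

def training_weight(stars: int) -> float:
    """Weight applied to a wad's examples during training; wads with
    no rating (stars outside 1-5) contribute nothing."""
    return RATING_WEIGHT.get(stars, 0.0)
```

A real setup would need a loss formulation that can actually use negative examples, but the mapping captures the "emulate / avoid" buckets described above.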

43 minutes ago, Jon said:

I read (skimmed) the paper, and was surprised (given @Linguica's comment) that they did actually generate maps. Or maybe I missed a nuance. Anyway I mailed them with some questions, also requesting the outputs.

They claimed they generated maps, but I see no proof beyond the tiny picture of a map inside a larger Doom screenshot. Unless that was the map, and it sure looks like an official map. I'd like to know how these maps were made. A robot working at a PC with WadEd 1.83? An image of a map?

2 hours ago, Fonze said:

Omg, really? Vanilla is 3D, first off, and second, no less 3D than GZDoom. Also, why would you train an AI to work with GZDoom before it can work with vanilla? Even people shouldn't be doing that.

But the B guy on that popular YouTube channel taught us what to believe, like he's some sort of teacher that went to college to be an educator.

1 hour ago, Jon said:

I read (skimmed) the paper, and was surprised (given @Linguica's comment) that they did actually generate maps. Or maybe I missed a nuance. Anyway I mailed them with some questions, also requesting the outputs.

 

17 minutes ago, geo said:

They claimed they generated maps, but I see no proof beyond the tiny picture of a map inside a larger Doom screenshot. Unless that was the map, and it sure looks like an official map. I'd like to know how these maps were made. A robot working at a PC with WadEd 1.83? An image of a map?

You guys might have missed their github repo: https://github.com/DanieleLoiacono/DoomGAN

 

There's a level there along with a link to a YouTube video of it.  The overall shape certainly seems to be from their output but the mechanics of how the level was created are unclear.

2 hours ago, david_a said:

 

You guys might have missed their github repo: https://github.com/DanieleLoiacono/DoomGAN

 

Where did the article mention a github?

 

The article bluntly closes with "Both are currently prototypes and are not available for players to test."

29 minutes ago, geo said:

Where did the article mention a github?

 

The article bluntly closes with "Both are currently prototypes and are not available for players to test."

The awful BBC article?  I didn't even bother reading it.  I found the github page earlier in the thread.

9 hours ago, Fonze said:

Omg, really? Vanilla is 3D, first off, and second, no less 3D than GZDoom. Also, why would you train an AI to work with GZDoom before it can work with vanilla? Even people shouldn't be doing that.

 

Also, boss battles, mazes? That AI just failed its task simply by being allowed to consider those design-trope failures. Like with people, though, an AI would need to learn to make regular Doom maps first before making non-Doom maps in Doom.

Agreed. If I was going to attempt a project like this, I would do everything I could to minimize the "search space" (available toolset), to reduce the sheer volume of possibilities available to the AI, so that I might get meaningful results within my lifetime :)

 

8 hours ago, david_a said:

What you need are custom loss functions (a number that goes up the worse something is), which is perhaps a slightly easier way to think about it...

Yes, using "penalties" works better than "rewards", because you can define an endpoint: 0 = perfect output. Rewards have no such endpoint: zero reward tells you little, and so does a billion. And, it's much easier to find and score faults than it is to identify and quantify beauty, or "funness".

 

7 hours ago, Pegleg said:

As with any learning, you'd have to start from the ground up...

So it would be with an AI learning to make Doom levels. The "advantage" to the AI is that you could feed it enormous data sets (i.e., the entire idgames archive) and let it chug away, so the "learning" might take place more quickly than a person.

 

I suppose files could be grouped based on rating and so the AI could be fed the 4 and 5 star wads (these are good, emulate properties here), the 3 star wads (these are average, they can be emulated), the 2 star wads (try not to emulate these), and the 1 star wads (do not do the things you find in here). From that, the AI could begin generating levels. There would need to be some feedback mechanism, so that the AI could further refine its level-making rulesets. Eventually, you'd probably end up with a decent megawad.

This is a good starting point. But, there are some real problems with the approach:

  • You get 1 rating, with only 5 choices, per person, for a whole level (or even megawad). This is far too coarse an indicator when you have dozens of rooms in all different styles. Some have massive structures, and some are rectangles joined by hallways. With such a coarse indicator, you might need to feed the AI millions, or even billions, of rated WADs for recognizable patterns to emerge.
  • With the ratings system, you don't know if a level is getting a 5 because it's nice looking, or because it's a blast to play. And, for a 9-level wad, maybe you're getting a 1 star for E1M1, even if the other levels are wonderful. Maybe the player quit after E1M1, without looking further.
  • Some ratings cannot be trusted. Some people give a 1 star because they don't like the author, regardless of level quality. I'm sure it's happened before.

AIs don't get anywhere unless they are able to identify (or, at least experience) patterns. Certain repeated exposures imprint a bit upon the AI's memory.

 

Think of teaching a baby how to speak. You could read a thousand novels to the baby, and if you read unemotionally, that baby would probably not learn much at all: "This person makes sounds. I'm bored." This parallels using the rating system to teach the AI: there are too many words in those novels, like there's too much going on in the levels, and the unemotional reading is like the lack of granularity in the scoring system.

 

But, bounce Baby on Mommy's lap, feed Baby, point to yourself and say "Mommy", and, pretty soon, Baby associates the word with his Mommy. And along with the word, are all the emotions of happiness, warmth, love, full tummy, etc. All of that input makes a huge impression. This provides high context for a single word, or fine granularity. This word scores as a "good" word, with plenty of good feelings and emotions, as well as plenty of audio/visual info.

 

This fine granularity is what is needed to teach good Doom level building to an AI. The AI needs to know how beautiful, good, and fun things are, on a line, sector, and thing basis, as well as an overview of an entire room as a whole.

 

A really good start is if you could make the following functions work:

  • SectorBeautyScore(sector_number) - This function outputs a number that accurately describes how beautiful a particular sector is. 0 = the most beautiful sector ever seen, and the higher the number, the uglier the sector.
  • SectorFunScore(sector_number) - Same as above, but describing fun factor, with 0 being the most fun you've ever had
  • RoomBeautyScore(sector_list) - Works like SectorBeautyScore, but for a list of sectors comprising a room, however that's defined
  • RoomFunScore(sector_list) - How much fun are all of these sectors that make up a room
  • LevelBeautyScore() - Entire level beauty score
  • LevelFunScore() - Entire level fun score

If you can make these functions work "accurately" (as far as humans are concerned), you can make your AI create nice levels, guaranteed. In fact, if you had working versions of these functions, you wouldn't need an AI - a genetic algorithm would do the job, just randomizing levels until these functions returned very low penalty scores (lower is better).
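If those functions existed, a minimal version of that randomize-and-keep loop might look like this. Everything here is a stub: total_penalty stands in for LevelBeautyScore() + LevelFunScore(), and mutate ignores its parent, which makes this plain random search rather than a real genetic algorithm:

```python
import random

def mutate(level: dict) -> dict:
    """Hypothetical: would randomly tweak vertices, sectors, or things.
    This stub just rerolls a stand-in parameter."""
    new = dict(level)
    new["seed"] = random.random()
    return new

def total_penalty(level: dict) -> float:
    """Stand-in for LevelBeautyScore() + LevelFunScore(); here a dummy
    scoring distance from an arbitrary target value."""
    return abs(level["seed"] - 0.5)

def evolve(generations: int = 1000) -> dict:
    """Keep the lowest-penalty candidate seen so far (lower is better)."""
    best = {"seed": random.random()}
    best_score = total_penalty(best)
    for _ in range(generations):
        cand = mutate(best)
        score = total_penalty(cand)
        if score < best_score:
            best, best_score = cand, score
    return best
```

With real scoring functions plugged in, this loop is the "try thousands of variations until the beauty detector approves" idea in its crudest form.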

 

What you need the AI for is for it to learn how to accurately compute those functions. So, you have to train it to do those 6 functions (at least).

 

So, to train this AI, you need a human to run through the levels of the idgames archive, rating the level's sectors and rooms. Maybe "sector" is not the right level to use. But it *is* the minimum granularity you need, if you want to build an AI that draws sectors beautifully. Is this a massive, unrealistic task for humans to do? Yes, absolutely. But it could be streamlined. 100 people could run through levels with a custom port, "shooting" beautiful areas with the BFG, kinda nice areas with the plasma gun, decent areas with the shotgun, and ugly areas with the pistol. The port would output and upload this data to a master server. That would be relatively streamlined, yet still unbelievably unrealistic.

 

It simply cannot work (well) without this level of detail. Sounds like a fun project (which I have no time for).

 

7 hours ago, Jon said:

Are you folks versed in how GANs work?

Yes. 

 

Edit: I mean, at a high level. The one I made didn’t produce any useful output. They are by all accounts extremely difficult to train, and my understanding of deep learning isn’t deep enough to intuitively know what to tweak to improve it. 

16 hours ago, david_a said:

Yes. 

 

Edit: I mean, at a high level. The one I made didn’t produce any useful output. They are by all accounts extremely difficult to train, and my understanding of deep learning isn’t deep enough to intuitively know what to tweak to improve it. 

 

Sorry for my snarky post. I have no doubt you know what  you're talking about here (ever since you recommended that academic procedural content generation book to me) -- but some of the other comments above seem to be quite misinformed on how these things actually work.

6 hours ago, Jon said:

 

Sorry for my snarky post. I have no doubt you know what  you're talking about here (ever since you recommended that academic procedural content generation book to me) -- but some of the other comments above seem to be quite misinformed on how these things actually work.

I didn’t even read it as snarky; I really don’t know the backgrounds of anybody else here.

 

I knew absolutely nothing about machine learning before I set off on my Wolf 3D odyssey maybe 6 months ago. As it turns out, trying to go from “nothing” to Generative Models (the fringes of what’s possible in ML today) needs more than 6 months to fully comprehend 🙂. I have a deadline of Thursday this week for a talk I have to do at a community meetup about it. For the talk it’s enough that I understand it at a high level even though none of my attempts worked, but it might be fun to spend more time afterwards improving it with the help of the community. 

 

What I’ve tried so far for making Wolf 3D levels:

* Variational Autoencoders: Not sure this is at all appropriate for a Wolf 3D level. It seems more useful for continuous data like colors; Wolf 3D maps are very discrete (what’s halfway between a door and a wall? It doesn’t really make sense). 

* Generative Adversarial Networks: There’s about a billion ways to tweak these and I just made one attempt adapted from some sample code. I suspect the discriminator is vastly better than the generator so it’s not figuring out how to move towards a real-looking map. Technique has some potential I think. 

* Long Short-Term Memory / Recurrent Neural Network: I think this is pretty promising but it needs to be fed 2D context, not just 1D. 
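The "what's halfway between a door and a wall?" problem is usually dodged by one-hot encoding the tiles and snapping continuous model output back with argmax. A rough sketch, with an invented tile vocabulary:

```python
import numpy as np

TILE_TYPES = ["floor", "wall", "door"]  # hypothetical tile vocabulary

def one_hot(tile_map: np.ndarray) -> np.ndarray:
    """(H, W) integer tile map -> (H, W, n_types) one-hot channels,
    the usual way to feed discrete tiles to a continuous model."""
    return np.eye(len(TILE_TYPES))[tile_map]

def decode(channels: np.ndarray) -> np.ndarray:
    """Snap continuous output back to discrete tiles via argmax, so
    'halfway between a door and a wall' collapses to whichever type
    the model scored higher."""
    return channels.argmax(axis=-1)
```

This doesn't make a VAE's smooth latent space meaningful for maps, but it at least gives the model a representation it can output.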

 

One issue common to all of these that I've found is that my data had some REALLY terrible maps in it. I had already removed a lot of trash, but there was still stuff like 2-tile placeholder maps in my data set. I don't have time to re-run everything again before the talk with that stuff removed.

On 5/13/2018 at 9:06 AM, david_a said:

...For the talk it’s enough that I understand it at a high level even though none of my attempts worked, but it might be fun to spend more time afterwards improving it with the help of the community...

 

What I’ve tried so far for making Wolf 3D levels:

* Variational Autoencoders: Not sure this is at all appropriate for a Wolf 3D level. It seems more useful for continuous data like colors; Wolf 3D maps are very discrete (what’s halfway between a door and a wall? It doesn’t really make sense). 

* Generative Adversarial Networks: There’s about a billion ways to tweak these and I just made one attempt adapted from some sample code. I suspect the discriminator is vastly better than the generator so it’s not figuring out how to move towards a real-looking map. Technique has some potential I think. 

* Long Short-Term Memory / Recurrent Neural Network: I think this is pretty promising but it needs to be fed 2D context, not just 1D.

Sounds like you've touched on a lot of stuff. Is your talk going to be recorded? Seems like it would be interesting.

 

Does your goal lean more towards the AI, or more towards making a Wolf map maker?

 

Making maps is hard, even for humans. Identifying what makes a map good (or bad) is hard, especially in a mathematical, numerical way. Before an AI can make good maps, the concept of "good" needs to be defined really well. I believe that this is such a major problem that I would approach the problem from the other end, and work my way towards the eventual AI solution. I would concentrate on really good, domain-specific tool primitives: Very fast wall and room generation, extensive theme lists (what walls/things look and work well together), fast and accurate pathfinding, and the largest possible collection of "scorers" (discriminators?) possible.

 

Lots of simple things, like:

  • concave shapes are appealing, jagged rooms are not.
  • color counts should not exceed a certain threshold (to avoid visual clash)
  • too much/not enough ammo/health/enemies, per encounter, since map start
  • line-of-sight angle considerations
  • min/max map size
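A couple of those checks sketched in Python, with the thresholds pure guesses:

```python
def size_penalty(n_tiles: int, lo: int = 64, hi: int = 4096) -> float:
    """Penalize maps outside a hypothetical min/max size band;
    zero inside the band, growing linearly outside it."""
    if n_tiles < lo:
        return float(lo - n_tiles)
    if n_tiles > hi:
        return float(n_tiles - hi)
    return 0.0

def ammo_balance_penalty(ammo_units: float, enemy_hp: float) -> float:
    """Penalize encounters whose placed ammo strays far from what the
    enemies require (the 1.5x headroom factor is an invented tuning
    constant, not a real balance rule)."""
    needed = enemy_hp * 1.5
    return abs(ammo_units - needed) / max(needed, 1.0)
```

Each check is cheap and narrow on its own; the scoring system would come from stacking dozens of these.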

I might also build some tools for statistics, along the same lines as the things I learned how to score. A database could be built by analyzing "good" levels. And for each statistic, I would track estimated player health and ammo, and percentage of map completed, because those factors can and should affect what comes next in a level. And these statistics may hold new info you didn't expect that an AI might pick up on. For example, maybe "kitchens" have a more rectangular shape, usually contain certain items, probably have a subset of enemies, and may have different exits, color schemes, etc.

 

Once all of those tools are built, debugged, and optimized, and you have a large set of collected data, with more tools that can easily run pre-built queries, you are *now* ready to experiment with some automation. Maybe your database drives your scoring systems. Or maybe it serves as blueprints for level fragments. A massive iteration count with pseudo-random tweaking, combined with tossing badly-scoring areas, should allow for some interesting constructs.

 

I believe the most important component of all of this is a robust scoring system. Of course, this is also the most difficult component to build!

 

Anyway, I guess you can tell that I find automated map generation to be a fascinating topic. I don't mean to sound particularly knowledgeable here - these are just some ideas. I have experimented with quite a few optimizing algorithms, as well as 2D map generation, in various forms, but nothing as ambitious as this. I'm just offering some thoughts here. Glad to hear that you want to continue after your talk. Good luck, with your talk, and your further experiments.

On 5/11/2018 at 11:31 AM, david_a said:

We attempted to make a scoring system for how "fun" a Wolf 3D level is, but it is (unsurprisingly) not very good at predicting that. Some trash gets penalized, but seemingly just as much gets promoted based on some quirk of the scoring. Switching that around to purely penalizing bad design features may have gotten us better results. That might be fun to explore, since I have a few thousand Wolf 3D levels to play around with and a fair amount of them are absolute junk.

Sorry - dbl post.

 

I meant to quote this, david_a, as I knew that you had already done some work in this area. Not sure what you came up with, but I do agree with using a pure penalty system. This gets interesting after you devise multiple things to penalize, many of them being in conflict with each other. It starts to become important what values you use for penalties. Not the actual numeric value, but, rather, the magnitude, as it relates to other penalties. If balanced carefully, you end up with a system that gracefully manages conflicting requirements, which is usually *very* difficult to get a computer to do. If you use double floats, you can get your system to simultaneously consider massive penalties, and also care about minimizing tiny annoyances.
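A sketch of that magnitude-balancing idea (the penalty names and weights are invented; the point is just that float64 can hold a showstopper and a tiny annoyance in one score):

```python
def combined_penalty(checks, weights):
    """Weighted sum of named penalty checks. Float64 range lets a 1e9
    'unreachable exit' penalty coexist with a 1e-3 'slightly jagged
    wall' nit in the same number, so hard failures always dominate and
    aesthetic nits only matter as tiebreakers."""
    return sum(weights[name] * value for name, value in checks.items())

# Hypothetical weights, chosen for relative magnitude, not exact value.
WEIGHTS = {"unreachable_exit": 1e9, "missing_key": 1e6, "jagged_walls": 1e-3}
```

Two otherwise-identical candidate levels then sort correctly: one with any hard failure scores astronomically worse, while among clean levels the nits still break ties.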

 

You really want to employ as many checks as you can think of, and let them return wide variations of penalty. I'd like to discuss this further, sometime down the road, if you're interested.


The talk will not be recorded, but I will submit it again next year to the Indy.Code() conference (they didn't pick it this year, which I'm a little miffed about considering some of the slop they did pick, but I wouldn't have been done in time anyway 🙂).

 

The talk is more of a general overview of deep learning.  I chose Wolf 3D as my subject since the maps are close but not identical to the types of data you normally see in examples (and it's a good hook since people my age have nostalgia for it).  I figured I would learn more by going a little bit beyond what all the samples do.

 

I worked on and off with the non-ML portions of this project for about 2 years, mainly at weekend Hackathon events at my company.  I haven't thought much about what happens when the ML thing is "done" since it's been looming over me for a while, but there's a ton of stuff there that it would be a shame to abandon. The .NET portion of this ("Tiledriver") can convert binary GAMEMAPS files to ECWolf's UWMF (pretty sure this is the only tool out there that can do this).  I'm sure I'm missing some edge cases but it's probably 95% there.  There's a WPF GUI for it that could probably be turned into a level editor (I've attached a screenshot; not sure how to inline it and resize it).

 

I think generating maps is a fascinating topic too, but what I've found out is that I don't have much intrinsic motivation to work on Wolf 3D.  I... don't really like the game, and I can only play it for 10-15 minutes at a time before getting bored.  I would love it if ECWolf supported RoTT or Blake Stone properly since those are at least a little more interesting.

 

Still, I think Wolf 3D is a great subject for level generation since a straightforward map is conceptually dead simple, but they can have more advanced elements too (keys, patrolling enemies, push walls) that would be more challenging to integrate.

 

I had planned to look into messing with Descent maps next, since A) nobody to my knowledge has ever programmatically generated a Descent level, B) it's 3D so I have to level up with all that math, and C) Descent allows for overlapping geometry, but humans quickly lose the ability to reason about it in a level editor. Doom is an option too, but I feel intimidated by all the great things that have already happened in the Doom community (Oblige, WadC, etc.) so I'm not sure what I could do that would be novel.

 

I think it's likely that I would like to tinker with the Wolf 3D stuff some more, though.  My ML attempts weren't terribly satisfying and I would at least like to get something that vaguely looks like a level out of it.  I would love to cooperate on something, and even having someone to bounce ideas off of would be great!

 

tiledriver.png


Descent is a great idea. I've often wondered how one would design something like WadC for 3D; but I had Quake-style brushes in my mind when I was pondering the question (and haven't done anything about it).

2 minutes ago, Jon said:

Descent is a great idea. I've often wondered how one would design something like WadC for 3D; but I had Quake-style brushes in my mind when I was pondering the question (and haven't done anything about it).

I was planning on messing around with octrees since they're practically the same as the level format 🙂.  That wouldn't get you any interesting curves or anything, though.  I think the cube-based level format would make some things easier and some things much harder; you can't design one side/wall of a room without taking the other side into account, for example.
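A minimal octree sketch, for the idea rather than Descent's actual format (Descent levels are linked hexahedra, not a strict spatial subdivision, so this is only the "practically the same" approximation):

```python
class OctreeNode:
    """Each node is a cube of space that is either a solid/empty leaf
    or splits into 8 child cubes, loosely how Cube-style engines
    represent carved-out level geometry."""

    def __init__(self, solid: bool = True):
        self.solid = solid
        self.children = None  # None = leaf; else a list of 8 nodes

    def subdivide(self):
        """Split a leaf into 8 octants inheriting this cube's state."""
        if self.children is None:
            self.children = [OctreeNode(self.solid) for _ in range(8)]

    def carve(self, index: int):
        """Hollow out one of the 8 octants (index 0-7), e.g. to dig a
        room or tunnel out of solid space."""
        self.subdivide()
        self.children[index].solid = False
```

Carving rooms this way does make the "one wall implies the neighboring cube's wall" coupling explicit, which is the double-edged property mentioned above.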

4 hours ago, david_a said:

I was planning on messing around with octrees since they're practically the same as the level format 🙂.  That wouldn't get you any interesting curves or anything, though.  I think the cube-based level format would make some things easier and some things much harder; you can't design one side/wall of a room without taking the other side into account, for example.

 

Have you looked at how Cube or Cube 2 / Sauerbraten do things?


No, but that might be interesting, although the Cube 2 engine probably has far fewer limitations than Descent does.


@david_a I couldn't see your PNG - I think the link might be funky.

I have lots of ideas, and yes, I'm receptive to discussing stuff, absolutely. I've got no time, but, other than that, I can listen well. Descent - there's an idea! Was Descent easily moddable? I can remember getting twisted around upside down, till I had no idea where I was, or where I had been. It was tons of fun for me, but anyone watching quickly got frustrated: "Turn!" "Go forward!" "You missed it, turn around!" In other words, I really sucked at navigating in Descent! I think I used a joystick.

 

I think cubes and prefabs built into cubes is a good starting point.

 

The more I think about it, the more I question the AI's role in all of this, since a more standard approach can take you very far. A glorified 3D maze generator, altered to reduce "twistiness" and avoid linearity at key spots, could probably create convincing areas guaranteed to be connected. Teaching an AI good design seems much further out of reach without tons of good data to feed it, and I don't know how to automate the teaching quickly. It's quite the challenge, huh?

 

@Jon Cube 2 looks kick ass!


Descent supported user-made levels out of the box and shipped with a level editor. The level format is a bit weird - it's a bunch of "cubes" (hexahedrons, technically) linked together.  It's not nearly as flexible as something like Quake, and I suspect that simple things are easier but complicated geometry is much harder to express.  Descent used a portal renderer so it also (unintentionally) supports overlapping level geometry.  There have been a few proof-of-concept maps showing this, but it's impossible to reason about in a level editor so it's a fairly unexplored area.

 

Let's try the screenshot again:

Tiledriver.png

