AtticTelephone Posted December 9, 2020
Help, I'm dumb. I was using SLADE3 and used that "import TEXTUREx or something from base resource" option so I could use some cool silver textures from Realm667. However, I messed something up, because I used it on both the textures and the flats, which confused PRBoom+. How do I define which images are flats and which aren't? This might also fix the issue of the flat silver textures appearing when I look for wall textures in Doom Builder 2. Please answer fast.
Gez Posted December 9, 2020
You go into the texture editor and delete the entries corresponding to flats. They should be recognizable by the fact that they're all 64x64.
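(For anyone who wants to double-check outside of SLADE's texture editor: below is a minimal sketch, not anything SLADE itself does, that lists the lumps sitting between the flat markers in a WAD's directory. The file name "mymod.wad" is just a placeholder.)

```
import struct

def list_flats(wad_path):
    """List lump names between the flat markers (F_START/F_END or
    FF_START/FF_END) in a WAD directory. Real flats are 64*64 = 4096
    bytes of raw pixel data, so that size is used as a sanity check."""
    with open(wad_path, 'rb') as f:
        data = f.read()
    # WAD header: 4-byte magic, lump count, directory offset (little-endian)
    magic, numlumps, diroffs = struct.unpack_from('<4sii', data, 0)
    names, inside = [], False
    for i in range(numlumps):
        # Each directory entry: file offset, size, 8-byte padded name
        filepos, size, raw = struct.unpack_from('<ii8s', data, diroffs + 16 * i)
        name = raw.rstrip(b'\0').decode('ascii', 'replace')
        if name in ('F_START', 'FF_START'):
            inside = True
        elif name in ('F_END', 'FF_END'):
            inside = False
        elif inside and size == 4096:
            names.append(name)
    return names

print(list_flats('mymod.wad'))  # hypothetical file name
```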
Kappes Buur Posted December 9, 2020 (edited)
What may confuse you is the way id Software set up the use of images for walls and for floors/ceilings. Remember that at the time DOOM came onto the market, most computers had only 640 KB of memory for both the OS and the application. So they devised a memory-saving system in which a wall texture could be assembled from one or more images (called patches). These patches, placed between P_ markers, can be combined in the texture editor into a texture of a given size by varying the offset of each patch. The resulting definitions are saved in the TEXTURE1/TEXTURE2 and PNAMES lumps.

A FLAT is a different kind of image, always 64x64 in size (as Gez mentioned) and placed between F_ markers.

That does not mean you cannot use a 64x64 image as a wall texture, as long as it is properly included in TEXTURE1/2 and PNAMES. Just make sure the patch/texture names are not the same as the flat names.
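(As an aside, the TEXTURE1/TEXTURE2 format is simple enough to inspect by hand. Here is a rough sketch, under the assumption that you have exported the TEXTURE1 lump from your WAD to a file; it lists each texture's name and size and flags the 64x64 entries that were most likely imported from flats. The file name "TEXTURE1.lmp" is hypothetical.)

```
import struct

def parse_texture1(lump):
    """Parse a raw TEXTURE1/TEXTURE2 lump (e.g. exported from SLADE) and
    return (name, width, height, patch_count) for each texture entry."""
    count, = struct.unpack_from('<i', lump, 0)
    offsets = struct.unpack_from('<%di' % count, lump, 4)
    textures = []
    for off in offsets:
        # maptexture_t: name[8], flags int32, width int16, height int16,
        # columndirectory int32 (unused), patchcount int16, then the patches
        name, _, width, height, _, patchcount = struct.unpack_from('<8sihhih', lump, off)
        textures.append((name.rstrip(b'\0').decode('ascii', 'replace'),
                         width, height, patchcount))
    return textures

with open('TEXTURE1.lmp', 'rb') as f:      # hypothetical exported lump
    for name, w, h, n in parse_texture1(f.read()):
        flag = '  <- 64x64, likely an imported flat' if (w, h) == (64, 64) else ''
        print('%-8s %dx%d (%d patches)%s' % (name, w, h, n, flag))
```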
Gez Posted December 9, 2020
The main difference between flats and wall textures is that they're not written/read the same way. A wall texture is defined column by column: if the data at byte X is a pixel, the data at byte X+1 is the pixel below it. (It's a bit more complex than that, because there are transparent areas and each column has a bottom after which you move on to the next column, but let's keep things simple.) Flats, on the other hand, are defined row by row: the data at byte X+1 is the pixel to the right of the pixel at byte X.

Why do things differently? Optimization. Since the viewpoint is always level, walls are always drawn as vertical strips, and floors and ceilings as horizontal ones. Emphasis on drawn, because with a different perspective (if you look up or down instead of straight ahead), vertical things stop being drawn vertically and horizontal things stop being drawn horizontally.

The second important thing about perspective is that things get smaller with distance, so you have to scale them down. Here's where it gets clever: with the level perspective, all the points at a given distance from the camera are aligned horizontally if they're on the floor or ceiling, and vertically if they're on a wall. That means that for every column of wall, you only need to compute scaling (and lighting) once and can then apply it to the whole column, provided you draw walls column by column. Likewise for flats, you only need to compute scaling and lighting once per row if you draw them row by row.

So to optimize rendering and reduce the number of computations, you draw horizontal and vertical surfaces in two different ways, and that in turn leads to two different formats: if you draw walls by columns, it's better to store the texture data by columns, and if you draw flats by rows, it's better to store the flat data by rows. The consequence is that you can't easily mix textures and flats, though.
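(To make the two layouts concrete, here is a rough sketch of reading both formats from raw lump bytes; it is an illustration, not the engine's actual C code. A flat is indexed row by row, while a patch, the building block of wall textures, uses the Doom picture format: per-column lists of "posts", with transparent gaps between them.)

```
import struct

def flat_pixel(flat, x, y):
    """A flat is 4096 raw bytes stored row by row: adjacent bytes are
    horizontally adjacent pixels, which suits drawing floors/ceilings
    one horizontal span at a time."""
    return flat[y * 64 + x]

def picture_columns(pic):
    """Decode the Doom picture format used for patches: each column is a
    list of posts (runs of opaque pixels), which suits drawing walls one
    vertical column at a time and allows transparent gaps."""
    width, height, leftoffset, topoffset = struct.unpack_from('<4h', pic, 0)
    col_offsets = struct.unpack_from('<%di' % width, pic, 8)
    columns = []
    for off in col_offsets:
        posts = []
        while pic[off] != 0xFF:                       # 0xFF terminates the column
            topdelta, length = pic[off], pic[off + 1]
            pixels = pic[off + 3 : off + 3 + length]  # skip one unused pad byte
            posts.append((topdelta, bytes(pixels)))
            off += length + 4                         # header + pixels + trailing pad
        columns.append(posts)
    return columns
```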