3t0

Members
  • Content count: 29
  • Joined
  • Last visited

About 3t0

  • Rank: Warming Up

  1. I presume these are otherwise normal polyobjects that can be used as rotating doors, am I correct? This is great news! I have been hoping for this for such a long time.
  2. Got it. Even as an artifact of the GL engine only, it's basically in the same class as that Boom sector-leak thing. Just because it works (and will probably keep working for the foreseeable future) does not mean it's a feature. Pity :(. On the other hand, I am starting to appreciate how abusable Doom is :). Makes perfect sense. I think I now fully understand that the very thing that gives Doom its stability (community-, tools- and techniques-wise) is the same thing that makes more experimental (and perhaps dead-end) changes hard. This also explains the developer "gangs" that tend to form around engine ports. Unless you are something of a genius (ehm, ehm), it's really hard to keep a port going by yourself. So multiple people sharing the same ideas band together and push the port in the direction they want. This seems to be the case with the technically heavily extended but "lesser" ports. I never understood these weird forking patterns among Doom ports, but this explains them well. The sad thing about these advanced ports is that they are not that popular. Thank you for enlightening me.
  3. Sorry, I forgot that line action specials are NOT equivalent to ACS function names. I was ready to swear I had not seen such a token among line specials, until I noticed it's actually ACS. I am interested in it being reliable in k8vavoom only: would a custom 100% transparent 1x1px (or 8x8px, or whatever would be best) PNG FLAT/LUMP do it? I was completely misdirected by the image, thank you for the clarification. Now, I noticed you sidestepped the ThingModel(?) (ThingMisc(?), MiscModel(?), basically the "UDMF" "directly loadable" 3D model thingy) issue very carefully ;). I guess the same limitations would apply as with immutable geometry: it would not be worth supporting, right (not backwards compatible, problematic dissemination, need for consensus for multi-engine support)?
  4. I am fully aware that this is already possible with 3D sectors/floors. The problem is that too much of such geometry makes modern Doom engines slow :(.
  5. I could not post for some time because I had too much $$$work$$$, but I have found a little slot to slack here :) now. Okay, this sounds seriously cool; now, is there some ACS or line action that allows one to flip/assign/unassign midtexes? There has to be :D ! You have no idea @Gez - that is exactly what I tried recently (some months ago). I wanted to emulate an organic/rocky wall, but it has been some time since I messed with a 3D modeler. ~20 years ago I used to be a 3DS MAX guy, but since then I moved full-time to Linux, and that got hard. But, knocking on wood, it seems the blenderistas finally pulled their heads out of their asses, so I hope doing 3D stuff in Blender will get easier from now on. So after incredible self-torture I somehow managed to get Blender's MD3 exporter working, did some really shitty "surface patches" (terrain-like), and used the "-" texture in the map instead of the +-0.01 unit trickery you suggest (the walls are thus transparent). And it works ... kinda. Doom editors complain about the missing texture on such walls ("-"), but at least I eschew useless redraws and don't have to fuck around with positions (z-wise, along the Doom wall normal). Maybe I should try "TNT1"? @ketmar, would there be some other way to make "HOM-transparent" 1-sided walls? Now, where I hit the first tragic limitation is this: the shitty Doom thing naming scheme - Doom sprite names are very limited. Keep in mind this was just the research prototyping stage (I don't get anywhere with my prototypes though), and I already gave up. I wanted some "3D wall" reusability, so I came up with this idea: I would make several "wallmodels" (as I call them) with multiple sizes and shapes of walls. Then I would place them along my "inviso" 1-sided walls to give the illusion of detailed rocky sides. The problem is that I would need several dozen such models, and each one eats a slot (as a thing) in Doom's sprite table.
Even if I use only a "single frame" thing to anchor the "wallmodel" to, the organization of these things quickly became unmanageable, even at just 12 "wallmodels". I had to keep a 1:1 "map" between the actual md3 model name I wanted to use, the thing ClassName, and the shortened thing name (e.g. WLMD01), and this became confusing really quickly ... so I gave up. I wish Doom had a "misc_model", or rather a "Thing_model", that would allow picking an md2/md3 (and at least a skin) arbitrarily. I don't know, @ketmar, do you think something like that would be possible (or even already exists)? Using DECORATE for these structural "mapmodels" seems like a really wasteful way to go about it. So for now the conclusion is: "unrealistic". Okay, I understand his trick of making top and bottom parts of a wall for crude multitexturing - are you telling us there is a way to achieve the same effect without the helper sector, just on a pure 1-sided wall? Here with it! Each screenshot looks better and better :)
  6. Now wow! This is a really neat hack that I never realized was possible, thanks for the inspiration. I guess with a proper texture (pure white or pure black) and very fast door closing times (I hope generic doors can set that), this would work exactly like the HL2 occluders. I hope one can disable the door sound somehow. One more question: would this work even with ACS, by moving the ceiling and floor in place together from a script? I presume a "door" is any sector whose ceiling height is equal to its floor height, am I right? So just moving the ceiling down instantly should be enough to manually clip off any part of the level, right? Finally, I noticed that in GZDoom, when one uses an MD3 model, it is two-sided by default, i.e. each tri's normal faces both forwards and backwards, despite the tris being single-sided and showing as such in any normal MD3 editor or the ioquake3 engine. I roamed through the zdoom wiki, but never found a way to get normal 1-sided model rendering working. My question: is proper MD3 rendering (single-sided) supported in k8vavoom?
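To make the ACS question above concrete, here is a minimal sketch of how such a manual "occluder door" might be driven from a script. Everything here is a made-up example, not something from an actual map: sector tag 10, the closed height 0, the open height 128, and the script names are all hypothetical; Ceiling_MoveToValue is the standard ZDoom-family action special.

```
// Hypothetical ACS sketch: sector tag 10 is assumed to be the
// "occluder" sector; heights 0 and 128 are invented for this example.

script "CloseOccluder" (void)
{
    // Slam the ceiling down to floor height. A huge speed value makes
    // the move effectively instant (speed is in 1/8 map units per tic).
    Ceiling_MoveToValue(10, 65535, 0, 0);
}

script "OpenOccluder" (void)
{
    // Raise the ceiling back to its open height.
    Ceiling_MoveToValue(10, 65535, 128, 0);
}
```

Since this drives a plain ceiling mover rather than a door special, it may also sidestep the door open/close sounds, though that would need verifying in the target engine.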
  7. No problem, I hope for more modern k8vavoom maps from you instead.
  8. Sorry, I could not help myself, so this is one of those longer ones, but please consider reading it. I did not realize how much lightmapping affects the whole thing (I see now: in the Quakes everything is pre-subdivided by the map compiler, but k8vavoom must do it online) - I guess this is not your favorite part of the code. I understand that the movement code has nothing to do with rendering performance. What I was trying to say is that, judging from your analysis too, the numerical stability of Doom's movement code is just a very lucky coincidence, or rather: it works relatively solidly for what it is, but the result is basically achieved by a very ad-hoc solution. As I understand it now, no 3D matrices, quaternion multiplications, or any other unified, standardized approach is used; instead, if I understood it right, top-down 2D movement and rotation are done with vector addition and complex-number-style multiplication(?), and then the 3D z-element is handled ad hoc, on a case-by-case basis. This means that Doom movement has its own peculiarities that are relevant only to Doom movement. Solving this by ripping out the code and replacing it with more proper 3D movement code, as in the Quakes or other 3D games, is of course possible, but it would then irrevocably and terminally break the game for purists (I imagine the SR50 thing would no longer work, so goodbye speedruns and so on), or basically, for everybody else. Players have spent too much time honing their skills against the current system, so it's essentially impossible to replace: the result would be a completely different game. As I understand it, this makes it impossible for this "issue" to ever be "really fixed". This movement is still a very good solution for Doom and similar games, giving them "a doom-y feeling", but it has basically become a design constraint.
It took some time for this to sink in (like 20 years :) ), and it was only your explanation of the algorithms, sufficiently clear about the state of how things are done, that finally made me grasp this. This also made me realize why other engine coders are so reluctant to touch the movement subsystem - it's such a sensitive subsystem that touching it would immediately blow up into a lot of drama with players and other coders, breaking the game and mods in the process (I already read somewhere how the zdoom movement code is not sufficiently vanilla). It would be a cool feature to have for a "new indie" game, but besides Hedon (of which I am a great fan), that "rabbit thing", and perhaps a few platformers in the making, we haven't really seen many standalone games on Doom engines. Optimizing things for that use case would mean ignoring 99.9999% of actual engine users. So I finally realized that unless one is willing to maintain their own full fork (there are a few such corpses in the github cemetery), it's essentially impossible to change the movement code in any major way (unless, of course, some genius comes up with a way to do it somehow, which I doubt). Yes, I was aware of that; I just had not fully realized the implications. So immutable geometry is basically the only way to go with modern hardware - you upload all your stuff onto the card, including the buffers and shaders, and then let it handle things from there, with only a "few" handfuls of PCIe control commands passed back and forth. This way the CPU is free to do other stuff and the GPU can do its thing too; memory bandwidth is not eaten, as they are both more or less separate entities. No costly geometry uploads killing the transfer bus. Got it.
Hmm, so even if vertices are bound to their positions by BSP rigidity and unable to move in the X,Y (top-down) plane, Doom geometry is still not really immutable, because Z-height can change at any time (lifts, doors - hmm, this is a neat hack though, it allows "reusing" the BSP tree for any height :) ) - and yes, we are ignoring polyobjects for now, which can move on the XY axes but don't have a Z dimension - talk about limitations! At first I thought you just need to analyze the sector-linedef "action links" and you are all set, but then I realized that with ACS (and other scripting) you can change any sector height at any time, right? So engines might well do exactly the "action links" analysis I naively thought up: pick out immutable clusters of the level (everything that is not a lift or a door), convert those into immutable geometry buffers, and gain rendering speed - but then we would lose the mutability that advanced level designers are actually after. Got it. So basically, due to freely movable sector heights, we cannot get these speed optimizations, despite the vertices being rigidly pinned down in the X,Y plane by the BSP ... that is so sad. I always wondered about this - could the height calculations not be factored in somehow? Something like Z-respecting 2D view cone/triangle clipping in a secondary pass? Or is it not worth the effort? Anyway, in the current state of affairs, this basically means that all Doom engines will always be stuck in something akin to semi-immediate OpenGL mode, shuffling most of the geometry back and forth between the CPU and GPU with expensive change-detection passes in between, all to accommodate "possibly anytime" z-moving geometry. Hmmm, all of this is strongly logical, but rather sad ... but I am the guy who tends to ask stupid questions and come up with stupid ideas, so please let me stir the hornet's nest a tiny bit more.
When I read your 1D clipper treatise, I realized it has always struck me as strange why engine coders are reluctant to offload these things onto the ultimate accelerator, i.e. the mappers themselves. This IS old-school thinking (nowadays younglings are brainwashed into the mantra "throw more hardware, software and AI at the problem, as human time is much more expensive"), but map designers themselves are the "cheapest" optimizers available (just like experienced assembly coders are). Moreover, by now it is pretty clear that Doom exists outside of time (as this CPU-GPU mismatch proves), and people willing to map for it must "degrade" themselves into a "back then" mindset anyway, whether they want to or not. Mapping for Doom is such an act of labor and love that mappers learn and "waste" their time on elaborate Doom-only techniques anyway. They also become Doom experts in the process. So they are going to invest their time anyway. Finally, there is a company that often abused its map designers at the expense of CPU time, and it turned out to work pretty well for them: Valve. Anybody who tried mapping for Half-Life back then pretty soon learned that without the navigational "node mesh", the "phenomenal" Half-Life AI is pretty dumb (I dare say stuck at Duke3D level, which is abysmal). AI info nodes are its meat and potatoes, and without them, the HL AI is almost clueless. So, if you wanted good play in your SP map, you (as a level designer) had to place those damn nodes to annotate the environment for the AI to use. These days there are packages that can do this semi-automatically (for other engines), but back then it was serious labor. Once you had everything meshed up, if you then decided to change the level and your node meshwork became outdated, good luck: you had to rewire everything again. So this was Valve's solution, not something Carmack or Romero would do (they liked to minimize and automate such extra work as much as possible).
What surprised me even more was the introduction of occluders in Half-Life 2. Because it's still a borked Quake 1 engine with some slight polish :), there seem to have been some problems with geometry performance, similar to what we are discussing here. Instead of solving this programmatically, as Carmack would, Valve introduced a new brush function type to occlude geometry based on a toggleable state and possibly distance (if I remember correctly), offloading the optimization work onto the mapper. Now whole areas of the level could be turned visually off or on with the flick of a trigger. If I remember correctly, this was used to toggle immutable parts of geometry (like streets behind windows and such). So where am I going with this whole thing? Would a new linedef type that could act similarly to this HL occluder (toggleable by switches/ACS and what have you), i.e. turning visibility of whole chunks of the level on and off based on player position/triggers, help out? I went through several line action specials, but I have not seen any Line_BlockVis or anything like it there. Of course this poses a problem when the line's vis-blocking state desyncs with the player state (but there are other things in modern Doom engines that can desync too); however, this would be under the mapper's full control, so it would be their responsibility to keep it working. I think it would be a pretty "small" modification with a possibly huge gain that could help advanced mappers a lot. Or no? Would it be doable? And if yes, would it be worth it? While accepting this "offloading onto the mapper" mindset and thinking a lot about this immutability thing since yesterday (yes, I could not sleep well), a second idea came to me. Why not just offload the immutability calculation onto designers as well?
Let me elaborate: with vanilla levels, immutability is a given and mutability detection is simple: everything that has an action link (door, lift) is mutable, everything else isn't, period (there are probably a few special cases, but this is manageable, as you said). What is problematic is the mutability of advanced levels, the ones that would benefit from immutability acceleration the most, as you wrote. But there might be a way: the old formats proved insufficient for advanced mapping anyway, so I would dare to say the majority of advanced mappers now use UDMF for everything. UDMF has many interesting properties (like incredible zip compressibility, as it is pure text), but the thing that fascinates me most about it is its insane extensibility. It is basically a relatively simple key/value storage system, with a few extra data structures tucked in, that can accommodate new features very easily. This means it would be possible to define a new subclass of UDMF map, one with a "DEFAULT_IMMUTABLE" feature flag set, where all sectors are immutable by default, but sectors could still opt into mutability through, for example, a per-sector settable mutability hint/flag. This would map well onto mapper workflows and would require only a slight editor extension (basically just one new property per level and a checkbox per sector in the sector dialog). Usually, in a map, ~90%-~99% of sectors are immutable anyway, so for normal geometry there would be no additional workload. Now, once you have a lift or a door, or any Z-movable sector for that matter, that sector would not be valid/fully usable until marked as mutable as well. If you then tried to Z-move an immutable sector, i.e. one not marked for mutability, by any means (action special, ACS, ZScript, whatever), the engine would refuse to do so, spitting out an error: "Context x{action special, ACS, ...} tried to z-move immutable sector: xxx".
This way both mapper and player would be notified immediately, at runtime, that there is an error in the map, which is important, as we both agree. If a sector was marked as mutable, however, it would move up and down just fine, as intended. Naturally, this would impose more work on mappers: each time they make a door or a lift or any scripted area, besides creating action links and scripts, they would have to explicitly mark the affected sectors as mutable as well. But it would also provide a great benefit to the engines themselves. The engines could now churn through all those immutable "static sectors" and compile them into efficient display lists for very fast rendering. Mutable sectors would, of course, have to be handled the same way as they are currently, but with this approach they would automatically become special cases instead of the default case. No complex and error-prone mutability analysis would need to be done at the engine level, as all that work would be offloaded to the mapper, who is the single, final, and ultimate authority on which parts of the map need to be mutable anyway. While this would introduce some extra workload for advanced mappers, it could also potentially offer great performance gains, while still keeping most things as they are, for compatibility with older maps and whatnot. I don't know how much it would complicate engine internals, as there would now be two codepaths - a very fast one for immutable level geometry, and a slow one for everything else that is movable. But who knows, maybe it would even help internal engine organization. I realize this would be a pretty big change community-wise, requiring coordination and getting many other individuals and teams on board, but for now I am more interested in technical feasibility, implementability, and implementation cost than in marketing and actual realization. Do you think it would be worth it, given the work and changes required?
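To make the proposal above concrete, a hypothetical UDMF fragment with the suggested map-level flag and per-sector opt-out might look like this. All the names here (`default_immutable`, `mutable`, the namespace) are invented for illustration; nothing like them exists in any current UDMF namespace:

```
// Hypothetical UDMF extension; key names are invented for illustration.
namespace = "k8vavoom";
default_immutable = true;   // proposed map-level flag: every sector is
                            // immutable unless it says otherwise

sector // an ordinary room: nothing to declare, compiled as static geometry
{
    heightfloor = 0;
    heightceiling = 128;
    texturefloor = "FLOOR0_1";
    textureceiling = "CEIL1_1";
}

sector // a lift: explicitly opts into the slow, mutable codepath
{
    heightfloor = 64;
    heightceiling = 128;
    texturefloor = "PLAT1";
    textureceiling = "CEIL1_1";
    mutable = true;         // proposed per-sector hint
}
```

Since unknown fields must be ignored by conforming UDMF parsers, such a map would still load in engines that don't implement the feature; they would simply not get the speedup.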
  9. Hmm, so I built the new version, and under stencil + shadowmap mode I am really not seeing any sparklies so far - congratulations! Do I understand it right that the lightmapped renderer is a completely separate codepath that shares only a very small amount of code with the stencil+SM one? Otherwise, how come t-junctions get fixed only in the stencil+SM path? Until now I always played with lightmaps, but I am surprised by the performance of stencil+SM mode; at least in gunrock's (I think it was them) remasters I am getting a constant 130 fps, no sweat, same for the vanilla IWADs. Of course my daily driver is nothing much compared to the stuff available these days, but still: xx:xx.x VGA compatible controller: NVIDIA Corporation GK106GLM [Quadro K2100M] (rev a1) (prog-if 00 [VGA controller]). Unfortunately, while Remilia's map is indeed beautiful (it took me some stalking and digging around to get to the map, as I am blind :)) - and good job, by the way - the "overloaded" areas there fry my measly rig at 25-32 fps. This was a bit disappointing to see. Sadly, I am really overloaded with work here, so I haven't had much time to observe Doomworld happenings. But still, the little time I have, your (@ketmar's) analysis of the Doom movement code (i.e. the "slapping of a 3rd dimension onto what is essentially a 2D movement base code" and the resulting inconsistencies), and the recent release of the "final" Arcane Dimensions made me once again investigate Quake's worlds more. What really surprised me is the current framerate "stability" of all major Quake descendants, especially Quakespasm and Quakespasm Spiked, but also FTEQW. On the other side, all the major modern Doom engines, including GZDoom and k8vavoom (I haven't tried the KEX stuff, like Doom 64 I guess, which I suppose is quite different internally), have performance problems when you enable too much shit.
Don't get me wrong, vanilla levels almost always play rather well and solidly, but at a certain point, no matter which combination of custom modern wad and engine I pick, my machine starts to deep-fry potatoes. This made me wonder: is there something else (besides movement, which makes it essentially impossible to have solid physics in Doom) inherent in Doom descendants that makes it impossible for modern Doom renderers to maintain Quake-like framerates and fluidity? Or is that really just the cost of the dynamic lighting models and codepaths that modern Doom engines tend to pick? After being spoiled by Quake's constant framerate stability for the last few weeks (during after-work play), even in monster-heavy mapjams (it seems that with the current quakespasm/hardware generation (i.e. hardware up to ~7 years old - yes, don't laugh), Quake now seems able to reach and maintain almost Doom-level monster counts while still keeping impeccable gameplay fluidity), playing some new Doom maps was quite a jarring experience. This was a little disheartening, as I have always been a big Doom fan. I do realize this is a bit unfair to Doom engines (apples to oranges); after all, I guess Quake is really much simpler internally now than any modern Doom engine is - making heavy use of baked-in lighting, multi-texturing and full-scene single-pass rendering, it can probably spit out frames much faster than any Doom with PBR and stuff - but I would still expect at least k8vavoom in lightmap mode to be able to outperform it, or at least keep up with it. This does not seem to be the case, so I am curious whether there is an opinion about the possible causes. After all, the Doom weaponry and bestiary still seem to be the best ones around, even compared to Quake.
After some time you get tired of nerfy shotguns and the spongy Quake monsters higher up the ladder, with their embedded unfairness (Pinkies are really cute little puppies compared to Fiends; Shamblers are equivalent in evil to Archviles (yes, I hate Archviles) and they don't even revive the dead; and Vores and those damned Slime things are fucking ridiculous). I would really welcome something that plays like Doom but maintains Quake's stability. So is this even theoretically possible with current Doom engines (and where they are heading)? Finally, how much of the k8vavoom VavoomC VM comes directly from the QuakeC VM?
  10. Given that all the BSDs are trying to move away from the "tainted" evil of GPL compilers as soon as possible, with clang becoming the de-facto standard there, this is a rather weird limitation, but I digress. Any simple explanation? Even "religious reasons" is acceptable, to keep it simple and not start flames. Or is it "too complicated"? Not that you can completely avoid gcc with ports currently, either.
  11. That explains why you don't have a problem with a context-sensitive smart editor syntactically live-parsing stuff all the time, like QtCreator does. I started using it because it has pretty decent C99 support (not C++; I don't know C++), and at the time I was hacking on php, and it can navigate through C symbols very quickly. It was great when getting to know the php codebase, an awesome timesaver, mainly for looking under the hood at how stuff works. Unfortunately, most stuff in k8vavoom messes it up on my setup :(. So you are not even using cscope (not sure if it knows C++) or something like that? Guess I have to find something that can still intelligently jump around but does not caca itself on the k8vavoom src. Who doesn't have them? Let me guess: urxvt? I swear Slackware users are the strangest of the bunch, in a good sense.
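On the cscope tangent above: for what it's worth, cscope copes with large mixed C/C++ trees reasonably well when driven from the command line. A rough sketch of the usual workflow (the queried symbol name is just an example):

```
# Build a cscope index over the source tree (run from the repo root):
#   -R  recurse into subdirectories
#   -b  build the index only, don't start the UI
#   -q  build an extra inverted index for faster lookups
#   -k  "kernel mode": don't index /usr/include (good for self-contained trees)
cscope -Rbqk

# Line-oriented queries without entering the curses UI (-d: don't rebuild):
#   -L -1 <name>  find the global definition of <name>
#   -L -3 <name>  find functions calling <name>
cscope -d -L -1 main
```

Most editors (vim, emacs, and various IDE plugins) can then jump through the generated `cscope.out` database directly.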
  12. That's why you build at least zmirror zpools (so you have at least 2 copies even of the master structures). Those zmirrors are rather fast and not so computationally demanding, compared to higher-order redundancy configs like raidz1 (raid5), raidz2 (raid6) and raidz3 ("raid7"). Even those are not such hogs as the popular opinion of the gossip-spewing swarm of a million faceless parrot-drones on the internet wants you to believe, and they can run even on modest rigs. For example, I am an almost exclusive zmirror user. Of course, we are talking true mirrors here, like Linux dm mirrors, FreeBSD's geom mirrors or OpenBSD's softraid mirrors. That means you can do 3-way or 4-way mirrors if you are paranoid, i.e. setups where each element device is a fully functional 1:1 copy of the others. Even if you lose 2 drives in a 3-way mirror, you can still boot. Why did I get sidetracked into this? Because for some reason btrfs has these weird raid1c{3,4} mirrors that I don't understand, and I have a hunch the people working on btrfs don't either :). Are the devices in these special mirrors 1:1 full copies? Are they not? Who knows? Will it boot? It seems to me btrfs is just a bag of features thrown in randomly, and nothing makes sense. Yet the current generation of young Linux users keeps telling me how awesome btrfs is. Then, when you ask questions like this, they look at you with the blank stare of a deer in headlights. They really seem to be just kids playing with their OS X/lindows clones, arguing about insubstantial details like graphical interfaces, while UAC corporation clones run the whole show from the shadows. It's really sad what we have come to :). Regarding the hardware CRC32 checks, it's even sadder. I know it's hard to believe, but those don't really work. It seems unbelievable, until you actually see a real disk "controller" mangle the data and zfs fix it. See it a couple of times, and you realize that belief in the checks in your drive is completely ungrounded in reality.
You wonder how the thing even works. The built-in error correction is not that good (most likely not good at all), especially in consumer products. The thing is, you are not "helping" the hard drive; you are taking a defensive and adversarial stance towards it - you don't believe the hard drive - hard drives lie. All of them, and all the time. One could say "I couldn't care less". But I am a data hoarder, so I revisit some datasets only after very long pauses, sometimes even after years, and I want to be able to get that data out, perfectly, even if it's something completely banal, like catgirl pictures. That's why I am genuinely surprised when somebody brings up valid points that are not often talked about, like you did. btrfs doesn't really inspire strong belief in its ability to get data out after years of no use. But I also used to run ext4 with data=journal on top of dm raid1 for the longest time, so I actually understand where you are coming from. I just wanted to tell you that if you are prone to data loss like I am (and it seems that you are ... because some of us are just "lucky"), there are ways nowadays to make that a non-issue, and they are not even that "expensive", so maybe you could give it a test ride. Consider it friendly advice: "been there, done that, this was my way out". With this said, I think we don't have to revisit this theme ever again. That much is, in my eyes, a serious enough dataset to at least be mirrored :). But that is just me; whatever floats your boat. I asked because I was decommissioning one of my ancient machines and I have one pair of 1TB 3.5" drives from it. You would have to trim your set, or rather add it in as a secondary mirror. Slovakia, EU. So we share a border. Welcome to our new and amazing western world; may you build great and real capitalism, just as we did. Just remember not to buy western food; for us western citizens of lower status, it's of lower quality only, not what true westerners are allowed to consume :).
Regarding the packaging, I believe a ton of bubble wrap could fix it, but yes, I know very well these state-operated services might be a tiny bit ... unreliable. Especially in complicated times like we are living through now. So, you are, what, on a 64-bit CPU in a 32-bit OS in 32-bit mode? Capped at 4 gigs? Or what? A 32-bit CPU using PAE (because the last time I ran something like that, it was a nice little bag full of cute surprises)? Definitely. I tried some fiddling around the codebase, but for some reason my QtCreator gets really confused by this repo (a common occurrence with non-Qt-based, non-uber-C++-nerd projects). Just curious what code editing environment you are using (maybe I asked already but forgot, so please just remind me)?
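For reference, the mirror setups described above are each a one-liner in ZFS. The pool name and device paths here are placeholders, not a recommendation for any particular disks:

```
# Two-way mirror: each disk is a full 1:1 copy of the other
# (pool name and device names are examples only).
zpool create tank mirror /dev/sda /dev/sdb

# Paranoid three-way mirror: survives the loss of any two disks.
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

# Or grow an existing two-way mirror into a three-way one later by
# attaching a new device alongside an existing member:
zpool attach tank /dev/sda /dev/sdc
```

In practice, stable `/dev/disk/by-id/` paths are preferable to the bare `/dev/sdX` names used here, since the latter can change between boots.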
  13. I already played the demo, and I dare to say it was much better than the new Whispers, getting-stuck-wise (no offense: Whispers is really densely populated and a tad claustrophobic). Still, those archviles on skill 1 are killing me ;). Understandable; (un)fortunately I cannot stomach modern games anymore, so even should I ever buy a new desktop, that won't bother me. Oh my god, I swear this is the first time in my life that somebody else who is a Linux user isn't behaving like a stupid starry-eyed anime fangirl hyping this Ora..., I mean, Cyberdemon-made FS to the moon, and is instead looking at it with well-deserved suspicion. And don't get me started on their insanely weird RAID1 scheme, which somehow is not RAID1. For a moment I almost spilled my coffee. Trust me, it's not that bad; you "only" need to dedicate 1-2GB of RAM per TB to it on a usual workstation, and that's probably exaggerated. If you have 2GB+ of memory, like 4GB or 8GB, that's a good ratio of value and convenience vs. performance. I even think this rule is suggested only when you use dedup, so you could go lower; by capping it reasonably, not that much memory is lost to zfs. It will return memory to the OS if the OS needs it anyway, albeit clinging to it as long as it can. I have been running zfs for almost a decade now, on everything I can, and it has always been worth it; it has never let me down. How much memory do you have? From my playthroughs everything works besides the menu display and config generation - is there any specific reason k8vavoom autosaves the config each time it exits? I had to make it immutable for my changes to stick. Any way I can help get this fixed? That means you should have SATA, I think. How big a dataset do you have? Within 100 GB? 1 TB? If you are in your starving-artist period (I've been there), low on hardware (I've been there too), maybe I could donate a pair of used 3.5" drives to you. Of course, only if you can handle the paranoia :). I have a hunch you are not too far away for them to be shipped by post.
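The memory cap mentioned above ("not that much memory is lost to zfs" once capped) is a single module parameter on ZFS-on-Linux; the 2 GiB figure here is just an example value:

```
# Cap the ZFS ARC at 2 GiB (value in bytes); applied at runtime.
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots via modprobe options:
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
```

Note that after lowering the cap at runtime, the ARC may shrink gradually rather than instantly.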
  14. So, I had a really brutal year and a half. A new job, buried a parent, broke my leg - not much time for Doom and Quake stuff. Slowly getting back to dooming and quaking. Is this thing still going on? It seems a bit of momentum got lost. BTW, I pulled the recent tree, built it, and:
  • Paging in multi-page menus (too many lines to fit on a single screen) seems fubared once you get past the last visible line: redraw is broken, you can pick lines but don't see it, weird drawing issues.
  • I have trouble binding MOUSE4 and MOUSE5 from the menus - maybe an off-by-one error in the mouse button iteration? This is pretty crucial for me, as I started using the "back" and "forward" buttons on mice to switch weapons (much better than the wheel).
  • You can bind them through the config file, but every config save from the menu will break the binding.
  • MOUSE4/5 are "invisible" in the menu list (hard to describe: empty space, but not "---") and are not detected when pressed at the "press a key" prompt.
@Gunrock I tried "Whispers Remastered" and it's really beautiful visually, but rather brutal on UV. Besides, I hate archviles, and these are even invisible. The spaces seem too cramped (to me, at least) for so many monsters, and movement is quite hard in the narrow dungeons with so many details. @ketmar I am surprised that, as a Linux person, you are not using something reliable like ZFS (yeah, I am that kind of heretic) or at least the "fedora fanboy's btrfs"; do me a favor and get it working ASAP. What rig do you have? A notebook, a desktop?