About Jerry.C


  1. So what you have seen and what I have seen is not the same. What I have seen is that I can run nearly every Boom-compatible map out there without any sort of performance issue on a less than optimal system. So your supposed "performance improvement" doesn't really improve things at all on any reasonable computer these days. It apparently didn't even improve things 14 years ago - too bad that I do not own such an old computer to prove it.

     I see how you are trying to play this. You set yourself up as the all-knowing expert, you talk a lot but say little, and if someone posts actual numbers they are summarily dismissed because they do not match what you claim. So, if you want to keep up the impression that REJECT actually is beneficial, post something that proves it, not lengthy walls of text where the lack of information is hidden in the mass of words.

     It's also clear that you do not follow the math through. If you have 2 ms of think time on an average computer, you have 4 ms of think time on a low-end computer - and if you go even lower, you end up on a system where the map cannot be played at all and no magic is going to help. And you keep talking about 28 ms - as if that had any relevance these days! The magic number is not 28 ms but 16 ms, for 60 fps interpolated gameplay.

     Also, think time is not sight-checking time - it includes running the entire monster AI and movement logic as well. Sight checking is only a minor part of it. If you read my previous post you'll see that even on the largest map I tested I was unable to get GZDoom above 0.2 ms. Yes: 0.2 ms! That's the entire time spent in the sight-checking code, and therefore the entire time that could be saved by providing a REJECT table - and even here you still have to factor in the effect on the CPU cache. This map has roughly 16000 sectors, meaning a REJECT table of 32 megabytes!
     Fortunately the ZDoom family of ports is smart enough to only access REJECT in the sight-checking code if it is actually present, so these 32 MB never hit the cache. I wonder what would be worse: not being able to shortcut the sight checks, or the constant scattered hits on this large block of memory polluting the CPU cache like crazy. Unfortunately I cannot test it, because nobody has written a UDMF-capable REJECT builder.

     Speaking of smaller maps: if the sight checking hovers far below 0.1 ms per frame, there's really no need to investigate. If you start with nothing, all you can save is nothing. And speaking of other ports: unfortunately they lack the user-side profiling capabilities, so I'm stuck with GZDoom for such measurements.

     Yes, I considered that, but it's ludicrous outside of fabricated test cases. But let's assume it could happen. Even then you couldn't save more time than was actually spent - which for the largest map around was 0.2 ms! If you spend 0.2 ms in a certain section of code, no magic in the world will save you more than those 0.2 ms. On top of that, you also need some basic understanding of what causes sight checks to be performed: most are done by active monsters near the player, for which REJECT naturally provides no shortcut.

     Indeed. The point I was trying to make is that you really need some very, very old and very, very weak computer for this to become relevant - and that very, very old and very, very weak computer will not only increase its sight-checking time but everything else too, so in the end the proportions stay the same, i.e. very low. Let's be clear: REJECT made a lot of sense in 1993, when an average computer had trouble rendering 320x200 at a stable 35 fps. In that scenario saving a handful of percent was extremely important and every cycle indeed mattered.
     But you can twist it however you want: my first Pentium 90 already played the largest maps of the early years at a stable 35 fps - none of which actually contained a REJECT table, save a very few exceptions - and over the years processing power increased a lot more than Doom map complexity, so we never again got close to the cost-benefit ratio REJECT provided initially, mainly because the advent of higher resolutions shifted increasingly more time to the renderer side rather than the gameplay side.

     Those assumptions are based on real-world numbers from various sources, among them Steam's hardware survey, GZDoom's own survey, and what can generally be read in the tech press. They all come back the same: it is a vanishingly small minority who still use computers that might fall victim to what you claim here.

     All that would only matter if the time was in any way significant. One thing I learned about optimizing is to optimize where it actually matters. It makes no sense to do fine-grained performance analysis where a rough measurement clearly shows that the cost is insignificant - and the rough checks I did yesterday clearly show that it is indeed insignificant. I cannot manage to get this part of the code to spend more than 0.2 ms per frame, which is 1.25% of the critical time interval of 16 ms. Optimizing this code cannot save more than 1.25% even in the best-case scenario, and even then it would only be relevant if the total frame time was sitting just slightly above those critical 16 ms.

     No matter what, if a map runs badly on a system, you won't make it run well by shaving off one or two percent of execution time. If you are in such dire straits you have to make bigger compromises, like reducing the screen resolution to save time on the rendering.

     Unlike you, I have meaningful results. I already posted them, and they clearly show that even on extremely large maps the time spent in the code that can be shortcut is mostly insignificant.
     Which is very much unlike this: if you make such claims, where's the map being used, the port being used, and the hardware specs? You accuse others of being unspecific, and here we get a random claim that isn't backed by any information whatsoever. I absolutely cannot respond to it because I know zip about the scenario; I am simply supposed to trust that your assessment of the situation was right. So please, practice what you preach and back your claim with some actual numbers - in particular, that it was indeed REJECT that made all the difference and not some random occurrence with the network that just happened to coincide with the state of REJECT.

     There's no denying this? Tell me, why do I have to trust your empirical "proof" that apparently just comes from subjective observation? Again: post the numbers! Without numbers to support it, all you offer is a completely unsubstantiated claim about your personal impression, backed by nothing.

     I am not fighting anything. But what I do not like is that you make broad claims about performance improvements yet have nothing whatsoever to show. You never made a technical analysis; everything you said is based on personal impressions, and we are expected to trust them. Even Graf's 14-year-old post went deeper than that. He said "5% on a large map of its time spent on sight checking" - that's at least something to start from, and it shows that he actually took the time to run the code through a profiler. So, for the third time in a row: post some technical proof that REJECT really helps!

    1. Tristan


      That's zahl the evidence we needed, son.

    2. Cacodemon345


      Are you sure Jerry.C = Graf?
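For reference, the lookup being argued about above is tiny. The engine consults REJECT as a flat bit matrix before doing the expensive line-of-sight traversal. Here is a minimal sketch in C, modeled on (but not copied from) the structure of the original Doom source's sight-check shortcut; names and layout are illustrative:

```c
#include <assert.h>
#include <stddef.h>

/* REJECT is a flat bit matrix with one bit per ordered sector pair,
 * rounded up to whole bytes. For 16000 sectors that is
 * 16000 * 16000 / 8 = 32,000,000 bytes - the 32 MB mentioned above. */
static size_t reject_size(size_t numsectors)
{
    return (numsectors * numsectors + 7) / 8;
}

/* Returns nonzero if the table marks the pair as "definitely not visible",
 * letting the engine skip the expensive line-of-sight traversal. */
static int reject_blocks(const unsigned char *rejectmatrix,
                         size_t numsectors, size_t s1, size_t s2)
{
    size_t pnum = s1 * numsectors + s2;
    return rejectmatrix[pnum >> 3] & (1u << (pnum & 7));
}
```

Note that each lookup lands on an effectively arbitrary byte of the table, which on a 32 MB table is exactly the scattered-access cache concern raised in the post.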

  3. Tell me, what kind of computer are you using? Because where I live, I have to take out some huge UDMF monstrosities for any of this to have even the slightest bit of relevance - and for those I cannot build REJECT anyway. Not that it'd help, because GZDoom dumps REJECT entirely if there are portals in the map, which these days seems to be normal for port-specific projects. But when it comes to anything vanilla, or even most Boom stuff, I have to look hard to find anything that ever dips below 60 fps with either GZDoom or PrBoom+ - let alone 35 fps - on playsim-related load. 99% of the time it's the rendering that causes frame rate drops. To be honest, I have a hard time finding maps not making extensive use of DECORATE where the entire per-tic think time exceeds 2 ms per frame in GZDoom, which unfortunately is the only port capable of displaying such metrics - I'd love to see numbers from a simpler port for comparison.

     Have you even read the linked post? I'll help you out - here's the relevant quote: We should also note that this analysis was lacking one crucial component: 5% of how much? If it was 5% of 20 ms it would be relevant, but that is unlikely. If it was 5% of 8 ms, which I'd consider reasonable for a system of that age, we'd end up at 0.4 ms sight-checking time, and a saving potential of roughly 0.2 ms. With the Hexen algorithm the sight-checking time will probably be reduced to less than half, because from what I read it scales a lot better to large maps. And as of now, all three major ports (i.e. PrBoom+, GZDoom and Eternity), which are the only ones relevant when talking about strong limit removal, actually use a variation of the Hexen algorithm for sight checking.

     And my bad for not testing myself, but I have rectified that mistake now. Trying to get some real numbers is hard, because most maps I've thrown at GZDoom's "stat sight" report an average time of 0.0 or 0.1 ms per tic for sight checking, i.e. it's merely background noise.
     Even the largest map out there - ZDCMP2 - reports an average of 0.2 ms on my laptop with a 3-year-old 2.5 GHz dual-core Core i5, which is as run-of-the-mill as CPUs come these days. Remember, that is the total sight-checking time, not just the checks that would get shortcut by REJECT. So it's these 0.2 ms that can theoretically be saved by REJECT. Let's say we have a success rate of 50%, and we end up at 0.1 ms. On the largest map ever made - and a map with mostly contained areas, where REJECT really would be beneficial - it amounts to close to nothing!

     Wanna talk about the 'classic' sight-check algorithm? Ok, but then we have to get down to maps the size of Vrack3, maybe. Currently I cannot measure sight checking on this one because it just reports 0.0, so even with the old claim that on such large maps the BSP-based check was 5 times slower, it'd still be 5x background noise. I can also run this map with software rendering at a constant 60 fps on the same system at full HD resolution, so what does it matter anyway? This is one of the most detailed Boom maps out there and it hardly breaks a sweat on any account. There's also no need to test wide-open maps like Frozen Time. These may push the renderer to its limits, but they are a textbook case for bad REJECT performance, because the table would be mostly 0s in an area where every sector can see into every other one.

     So you can say what you want: if you save 50% of nothing, you end up with no improvement. The time spent in sight checking is totally irrelevant in the big picture once everything, including high-resolution rendering, gets factored in. In case you are still not convinced and point to older, slower computers - sure, they still exist in smaller numbers, but they'd have far more trouble keeping up with all the rest of the needed work, and the sight-checking part would still be irrelevant. Come to think of it, it was irrelevant 14 years ago, and since then computers have only gotten faster.
     So in the end you can twist it and turn it as much as you want - the simple matter of fact is that the time-saving potential here is not high enough to warrant the hassle of spending several minutes building a large resource. The calculus only changes if you run an old clunker that should have been retired 10 years ago.
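As a sanity check on the arithmetic in this post, the best-case saving can be written out explicitly. This is a toy calculation using the figures quoted in the thread (0.2 ms sight-check time, a 16 ms frame budget) plus an assumed REJECT hit rate; the function name is illustrative:

```c
#include <assert.h>
#include <math.h>

/* Upper bound on what a REJECT shortcut can save per frame, expressed as a
 * percentage of the frame budget. It can never exceed the total time the
 * engine actually spends in sight checking. */
static double saving_percent(double sight_ms, double hit_rate, double budget_ms)
{
    return 100.0 * (sight_ms * hit_rate) / budget_ms;
}
```

With a perfect table (hit rate 1.0), the 0.2 ms measured on ZDCMP2 is 1.25% of a 16 ms frame; at the 50% rate assumed in the post it drops to roughly 0.6%.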
  4. A piece of fail on all accounts. The idea was stupid in any case. It was even more stupid to make the entire thing depend on some arbitrary movement limitation of the game and to put the icing on the cake, it was broken and crashed when being used in a savegame. The worst thing about it was that every 'self-respecting' megawad copied the concept.
  5. It's really not that easy because those color sets are dependent on where the game puts its player color translation in the palette and on the palette itself. In Hexen, there's even different color ranges to translate for the different classes.
  6. I didn't want to do any testing myself, so I searched a bit and found a really old post about this subject with some actual numbers: https://www.doomworld.com/forum/post/370100

     So: a) the savings are minor even with an extreme scenario on a map that seems tailor-made for a REJECT speed-up - entirely disregarding that this info is 14 years old and was retrieved on far weaker systems than what is on offer as low end these days - and b) no, REJECT is not free! Building a good REJECT map takes time. I still remember that when I checked out RMB, it could run for several hours on such large maps. I have no idea how it is today, but I'd say it's still several minutes to build a good one.

     And what if you use a modern port that doesn't load an empty REJECT and just skips the check instead? Suddenly that 'free' table results in some avoidable CPU cache pollution and might actually SLOW THINGS DOWN, because the table can become quite large, with rather scattered access patterns. So it's not as clear cut as you make it out. It's only clear cut when running an engine that doesn't change how REJECT was used originally, on hardware that obeys the same principles as what was considered state of the art in 1993. That cache thing is a real concern: today it may actually be beneficial to have a slightly less efficient algorithm if it results in fewer hits on uncached memory.
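The build cost complained about here follows directly from the table's shape: one line-of-sight decision per ordered sector pair, i.e. O(n²) pair tests. A naive builder can be sketched as follows; the visibility callback is a stand-in, and real tools like RMB do far more sophisticated sector-to-sector visibility analysis than this:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Naive REJECT build: one visibility decision per ordered sector pair.
 * 16000 sectors means 256 million pair tests, which is why a thorough
 * builder can run for minutes or even hours on very large maps. */
static void build_reject(unsigned char *out, size_t numsectors,
                         int (*can_see)(size_t, size_t))
{
    memset(out, 0, (numsectors * numsectors + 7) / 8);
    for (size_t s1 = 0; s1 < numsectors; s1++)
        for (size_t s2 = 0; s2 < numsectors; s2++)
            if (!can_see(s1, s2)) {          /* set bit = "cannot see" */
                size_t pnum = s1 * numsectors + s2;
                out[pnum >> 3] |= (unsigned char)(1u << (pnum & 7));
            }
}

/* Toy visibility rule purely for demonstration: a sector sees only itself. */
static int toy_can_see(size_t s1, size_t s2)
{
    return s1 == s2;
}
```

The work grows quadratically with sector count, while the measurable payoff (per the numbers above) stays in the sub-millisecond range.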
  7. Jerry.C

    Nvidia launches RTX 2000 series

    You are mistaken here. GZDoom is purely CPU-limited; the GPU actually sits idle most of the time. You can easily verify this in 3.5.0 by playing around with the scaling slider in the menu - it's very unlikely to make any difference at all. Unless you switch on all advanced effects or render at ultra-high resolutions, it won't be any faster on a 1060 than on a 650. That's the main reason why I see little point in upgrading my graphics hardware. For the games I am playing, the older card is good enough, and I have a hard time finding anything to justify the expense. Full-resolution shadowmaps in GZDoom are not really worth it, nice as they may look.
  8. Jerry.C

    Good Things About the Doom Movie

    The only good thing it did was to pretty much end the directorial career of Andrzej Bartkowiak, which I am eternally grateful for.
  9. Jerry.C

    Nvidia launches RTX 2000 series

    The price would be absurd if it didn't sell at it. But from the looks of it the enthusiasts cannot wait to own one and are willing to pay that price. Why not milk those people up front before servicing the mainstream customers? They are essentially what finances NVidia's research.
  10. Jerry.C

    Nvidia launches RTX 2000 series

    That's hardly a "modern" graphics card... The CPU in my system comes with an Intel HD 4000, and just for fun I once installed drivers for it and ran GZDoom. It was clearly underwhelming, to put it mildly.
  11. Jerry.C

    Nvidia launches RTX 2000 series

    Surely it does. The only thing this card cannot do is use shadowmaps at full resolution; I have to reduce that from 1024 to 128 to make it work without slowdowns. Aside from that, GZDoom is hardly an engine that can put a modern graphics card under full load.
  12. Jerry.C

    Nvidia launches RTX 2000 series

    Who wouldn't? But it really pays off not to go cheap on the CPU; that'll allow the system to last for many years.
  13. Jerry.C

    Nvidia launches RTX 2000 series

    Tell that to my 650... But since I do not play modern games there's hardly any point upgrading. Doom ports won't run better for it.
  14. Jerry.C

    Skybox viewpoint in software mode

    Post your map if you want an answer about what's not correct.
  15. Jerry.C

    Nvidia launches RTX 2000 series

    10 years ago no one cared, because it simply was not usable under real-world conditions. Hell, 10 years ago hardware was barely capable of programmable shading for the first time. I cannot even find what this was all about; it seems to have gotten lost in the vastness of the internet. But if this actually works in real time it will be huge. Of course its viability first needs to be proven, but mark my words: this is the first step into a new era of hardware-rendered graphics. And remember: you cannot expect the first step to be an unqualified success, but as a whole, real-time ray tracing is what every developer had been wishing to use in their games for decades - it was just too slow. And even if this first generation of capable hardware is too expensive or not powerful enough to do it at a fluent 60 Hz, that won't mean the end. Just like programmable shaders got better and more performant over the years, so will this.