ketmar

k8vavoom: no good thing ever dies!


19 minutes ago, ketmar said:

...so it will become outdated the moment new hardware arrives. and different hardware does different things in different ways, so you will inevitably be forced to write a "general renderer", a "renderer for GPU X", a "renderer for GPU Y", and so on. or at least choose different internal code paths here and there. oh, yeah, that was exactly what they promised to solve, wasn't it? only now you don't have a high-level abstraction, so the underlying library cannot decide what would be better.

 

That risk may exist, but the basic idea here is that GPUs work by having command buffers submitted, and that's unlikely to change: in the end a GPU is just a machine taking instructions, and instructions are normally given as a list. How those command buffers are built is not that relevant; you have the API to abstract that part away.

 

Even now there are parts of Vulkan that are essentially no-ops on some supported hardware.

But you can talk as much as you want: high-level abstraction has no place in a field where performance is this important.

 

 

 

19 minutes ago, ketmar said:

 

it is somewhat like declarative vs. imperative programming: with an explicit `for (int i = 0; i < arr.length(); ++i) doSomethingWith(a[ i ]);` the compiler is obliged to do the loop in the written form (or try to guess what the programmer really wanted). with `foreach e in arr` the compiler has a lot more freedom -- it can decide to parallelize this, or visit the array elements in a different order. that is, explicit iteration may be better as a short-term solution, but it is worse as a long-term solution, and, to add insult to injury, it obscures the programmer's intentions (what to do) with irrelevant details (how to do it).

 

 

Yes, but a command buffer has no loops. It's just a one-dimensional list, nothing more, nothing less. And in the end, this is the least problematic part of Vulkan. The really nasty stuff is all the synchronization, and any driver is free to turn those operations into no-ops if hardware appears that doesn't need them, or if the feature being synchronized results in no operation at all (yes, that does exist).

 

 

19 minutes ago, ketmar said:

if anything, OpenGL should get more declarative abstractions, not vice versa. and no, it didn't hit a wall. there are many different reasons why vendors want to get rid of OpenGL, but "it hit a wall" is not among them.

 

Yes, it is. The "wall" is the inability to submit draw commands in multiple threads. In today's world that's a killing blow because multiple cores have become the only real way to make computers faster.

 

 

19 minutes ago, ketmar said:

yeah, Vulkan will magically allow me to get computation results before i send enough data to compute those results. what a wonderful time-breaking API we have now! can i use it to get unwritten k8vavoom code from tomorrow, so i can simply use it today?

 

Then you should try to find different places where multithreading is feasible. Don't think my first attempt was successful, either. I needed three tries until I found an idea that works - and it works great, with a performance boost of 10-20% depending on the map and near-zero overhead for the actual queue management. The only synchronization I need is to let the main thread wait for the helper thread to finish processing the queue, i.e. waiting for one event per frame. You can't get it much cheaper.
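A minimal sketch of that "one wait per frame" pattern (not the actual GZDoom code; the class and names here are made up purely for illustration):

```cpp
// Sketch only: the main thread pushes jobs all frame long and blocks exactly
// once per frame, when it needs the helper thread's results.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class FrameWorker {
public:
    FrameWorker() : worker([this] { run(); }) {}
    ~FrameWorker() {
        { std::lock_guard<std::mutex> lk(m); quit = true; }
        cv.notify_all();
        worker.join();
    }

    // Main thread: cheap push, never waits for the work itself.
    void submit(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m); jobs.push(std::move(job)); }
        cv.notify_one();
    }

    // Main thread: the single synchronization point per frame.
    void waitForFrame() {
        std::unique_lock<std::mutex> lk(m);
        done.wait(lk, [this] { return jobs.empty() && !busy; });
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m);
        while (!quit) {
            cv.wait(lk, [this] { return quit || !jobs.empty(); });
            while (!jobs.empty()) {
                busy = true;
                auto job = std::move(jobs.front());
                jobs.pop();
                lk.unlock();
                job();          // process one queued item without holding the lock
                lk.lock();
                busy = false;
            }
            done.notify_all();  // the one event per frame the main thread waits on
        }
    }

    std::mutex m;
    std::condition_variable cv, done;
    std::queue<std::function<void()>> jobs;
    bool quit = false;
    bool busy = false;
    std::thread worker;         // declared last so it starts after the other members
};
```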

 

 

 

 

 

18 minutes ago, Ferk said:

And I'm genuinely interested in your source port

thank you! ;-)

 

18 minutes ago, Ferk said:

so I'd rather stop discussing on this thread to stop contributing to the derailing.

sometimes heated discussion is what i need to relax. a small paradox. ;-)

24 minutes ago, Graf Zahl said:

Then you should try to find different places where multithreading is feasible.

i already identified several such places.

 

first, moving the whole renderer to a separate thread will give both the VM and the renderer a full 16 msec for their own business (at the expense of one added frame of latency; nobody will notice that at 60 FPS anyway ;-). the internal vavoom architecture is almost ready for this.

 

second, i have dynamically traced lightmaps for dynamic lights, so dynlights can cast proper shadows. a perfect candidate for parallel processing.

 

third, there is a lot of unused interframe coherency. not directly related to threading, but still something worth exploring (especially if i augment the map data with a PVS).

 

that's why i said that i have a huge (albeit somewhat coarse-drawn ;-) roadmap. vavoom's roots are in the days of single-core CPUs with weak GPUs, and we can do a lot of things better now. thanks, Janis: he created a very good foundation that can be improved a lot without significant rewrites.

 

and yes, one of the things i am planning to do is to upload as much data to the GPU as i can, and reuse it. so in the end i am doing mostly the same thing you're doing in GZDoom, only from a different POV, and with a different API. ;-)

 

12 minutes ago, Graf Zahl said:

So we do have something in common after all! :D:D:D

lol. looks like it! ;-)


p.s.: and i want to admit it again that i HAET doing low-level gfx programming. no, really, i hate it with a passion. so each time i see Vulkan and heaps of little irrelevant (for me) things i have to do to convince it to render a triangle, i am screaming: "no, god, no! please, no, ohmygod no!!!" ;-)


I'll try not to get too involved in this OpenGL vs Vulkan fight, but there's a couple of things I'd like to mention:

 

1) OpenGL 1 display lists have never been accelerated by any display driver, ever. They made some design mistakes in how those display lists deal with state, in such a way that they cannot be precompiled into a command buffer on any hardware. I can't remember the specific reason anymore, but I think it was something along the lines of the display list still relying on external OpenGL state.

 

2) Regarding fillrate, the Vulkan API actually goes to great lengths to address tiler GPUs - something that hadn't even been invented in the OpenGL 1 age. In the mobile world every GPU is a tiler. The basic idea here is that the hardware can work on a small tile of the total frame buffer using on-chip memory and avoid reading from or writing to the actual frame buffer memory. But the GPU needs help to know when it can avoid the cost of flushing memory off the chip. This particular feature will never make it to OpenGL, as it was designed specifically for Vulkan after they had already failed to find satisfying solutions in OpenGL ES and OpenGL 4.

 

3) My personal opinion with regard to OpenGL 1 is that the fixed-function pipeline is actually more complicated to learn than a shader-based world. The only advantage it ever had in my book was the simple glBegin+glVertex+glEnd design, but that can be replicated in Vulkan fairly easily: create one persistently mapped buffer for the vertices, and fire off one vkCmdDraw call for every third glVertex written into that vertex buffer.
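Roughly, that idea looks like this sketch (it assumes a pipeline, a persistently mapped host-visible vertex buffer and a command buffer in the recording state already exist; all names here are made up):

```cpp
// Sketch only: emulate glBegin/glVertex/glEnd-style submission on Vulkan.
// Vertices are written straight into a persistently mapped buffer and every
// completed triangle is drawn with its own vkCmdDraw call.
#include <vulkan/vulkan.h>
#include <cstdint>

struct ImmVertex { float x, y, z, u, v; };

struct ImmediateEmu {
    VkCommandBuffer cmd = VK_NULL_HANDLE; // already in the recording state
    ImmVertex      *mapped = nullptr;     // persistently mapped vertex buffer
    uint32_t        first = 0;            // start of the current triangle
    uint32_t        count = 0;            // vertices written for it so far

    void vertex(float x, float y, float z, float u, float v) {
        mapped[first + count++] = {x, y, z, u, v};
        if (count == 3) {                       // "every third glVertex"
            vkCmdDraw(cmd, 3, 1, first, 0);     // one draw per triangle
            first += 3;
            count = 0;
        }
    }
};
```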


OpenGL is excellent for people who don't give a flying fuck about gfx and just want to render their triangles and run away. it is not complicated -- i.e. any sane engineer can pick it up in minutes. on the other hand, it allows you to get your hands dirty if you want that (but still on a fairly high level).

 

going back to our analogies: Vulkan is assembler, OpenGL is C. with C, you lose some features asm people can use to write efficient code (like access to the carry flag, for example), but creating bigger things is way easier, and the required entry level is way lower. you still can create powerful things with it, but you need a good compiler to produce good code.

 

most (if not all) OpenGL problems and limitations can be solved without throwing away any convenient features. yes, that means OpenGL will be even bigger, and from some PoV, clumsier. but nobody ever said that OpenGL is a small library (and if somebody did say that, they're a liar ;-).

 

low-level gfx people will always see OpenGL as a big fat monster with unacceptable limitations ("i cannot live without accessing my carry flag, no software can be made fast without that!"). and GPU vendors will support those people, rolling out various macroassemblers, declaring OpenGL "dead", and so on -- for their own reasons.

 

yes, creating a good OpenGL implementation requires something GPU vendors won't do even if the fate of mankind relied on it: cooperation + open specs. because the only way we can get a good OpenGL implementation is to go FOSS (hello, linux!). also, Khronos has all the cruft the usual "design committee" always has, and they have zero interest in OpenGL. they simply stealthily moved away from it by introducing "modern OpenGL"; i knew there would be a Shiny New Gfx API the day they "deprecated" OpenGL, and was lazily watching which vendor would win the game of shoving their proprietary API into the standard. (btw, did you notice that Vulkan wasn't originally developed by Khronos? that is because Khronos cannot do it -- like any committee, they are unable to produce even half-good things)

 

next higher-level API will inevitably be "OpenGL reinvented", just like people keep reinventing things like message-passing OOP, coroutines, actor-model concurrency, and so on.

 

yes, OpenGL needs a facelift. and X11, for that matter. but throwing the baby out with the bathwater is not a good solution. it is never a good solution. and the baby always returns, only dirty and injured, so you waste time trying to undo the damage done.

 

that's why i said that OpenGL will not die, and i'll be watching its glorious return. maybe in another jacket, though. ;-)

 

 

p.s.: yes, i am exceptionally good at explaining myself. especially if i have to talk about things i see as obvious ones. after all, other people are telepaths, and will understand me anyway, aren't they?! and my broken English helps too.

Edited by ketmar


I get why you don't want to use Vulkan directly and I'm not saying you are wrong for not using it.

 

However, I disagree with your assembly vs C analogy because it misses one important aspect of OpenGL: it is an old "assembly", complicated by reasoning that has since turned out not to be a very optimal way of doing things. As a result, it isn't particularly good for high-level work, and it also isn't very good for low-level work. It is the equivalent of insisting on using 8086 assembly on a computer that is now 64 bit and uses deep pipelining. It may work, but it is all simulated by the hardware at this point. It also isn't a nice way to work, because that old method still relies on (now) pointless near and far pointers.

 

Now I'm not saying you should switch to Vulkan, because if the interest isn't there then that's never going to work out well. What you should probably consider at some point is finding a new high-level solution to your problems, as the old OpenGL "assembly" is maintained by people that have little to no interest in the high level. They will only maintain your fixed-function OpenGL as much and as long as their few remaining high-profile targets are still working with it.


i took C as an example exactly because it is not a good fit for modern CPUs either. C is tied to old ways of doing things, like OpenGL. they both need a facelift. ;-)

 

anyway, i can always implement the OpenGL subset i need for k8vavoom (and my other pet projects) on top of another API. i am a die-hard fan of DIY, so i am not afraid of an OpenGL EOL. meh, i can always revive it (or at least die trying ;-). maybe in the end i will have a better OpenGL than OpenGL, and set a new high-level API standard; why not -- it is always fun to go for a higher goal. ;-)

2 minutes ago, ketmar said:

i took C as an example exactly because it is not a good fit for modern CPUs either. C is tied to old ways of doing things, like OpenGL. they both need a facelift. ;-)

The facelift for OpenGL was 3.0 core profile, which you did not like. ;)

6 minutes ago, dpJudas said:

The facelift for OpenGL was 3.0 core profile

core profile is not a facelift, it is a completely different thing. they simply refused to change the name. an "OpenGL in name only" mod. ;-)

1 hour ago, ketmar said:

OpenGL is excellent for people who don't give a flying fuck about gfx and just want to render their triangles and run away. it is not complicated -- i.e. any sane engineer can pick it up in minutes. on the other hand, it allows you to get your hands dirty if you want that (but still on a fairly high level).

 

Yes, for that particular scenario it definitely gives quick results. But I have yet to find any solution that's good for quick results but also for long term sustainability. Most of the time these things stand in the way of each other.

However, any such software would still be better off using some middleware instead of depending on old technology the programming world at large has lost interest in. Because in that case, if the underlying implementation changes, the chances are good that the middleware adjusts - just look at the major 3D engines: they can all target various backends simultaneously, abstracting themselves from those gory details.

 

 

1 hour ago, ketmar said:

 

going back to our analogies: Vulkan is assembler, OpenGL is C. with C, you lose some features asm people can use to write efficient code (like access to the carry flag, for example), but creating bigger things is way easier, and the required entry level is way lower. you still can create powerful things with it, but you need a good compiler to produce good code.

 

 

Bad analogy. It's more like Vulkan is C and OpenGL is JavaScript. While there are already good solutions for making JavaScript performant, the language still suffers from serious design problems that lead to memory leaks, RAM waste by modern browsers and, in general, a buttload of very shitty code out there - and these issues cannot be fixed without changing the language. Its entire design basically ensures that you cannot do good memory management and protect yourself from badly written code. And now take a look at the solution to the problem: it's called WebAssembly - a low-level construct that has been designed to be a compilation target for a wide range of languages, not only JavaScript but also C and C++.

 

And just as few people write complex things in C anymore, yet C is still the main building block of more complex things, that is where Vulkan will fit in long term - not as an API that is programmed directly by application code, but as a unified layer below nicer things. The thing is just so new right now that we haven't reached that stage yet.

 

 

1 hour ago, ketmar said:

 

most (if not all) OpenGL problems and limitations can be solved without throwing away any convenient features. yes, that means OpenGL will be even bigger, and from some PoV, clumsier. but nobody ever said that OpenGL is a small library (and if somebody did say that, they're a liar ;-).

 

Yeah, making things ever more complex is definitely a solution for modernising them. The problem is just: it doesn't work. The more complex it gets, the more bugs creep in; the more possibilities there are, the harder it gets to find the performant path. Especially on the driver level such an approach is utterly deadly - a driver needs to be lean and streamlined, focused on allowing access to the hardware, nothing more, nothing less.

The problem with such fat libraries is that all that code still needs to be maintained, and at some point they just crumble under their own weight. This is actually the main reason why so much software turns into shit over time, unless its makers clean house at some point. OpenGL never really cleaned house: due to pressure from certain developers, the intended cleanup was rendered pointless by introducing the compatibility profile and by doing a half-assed job that didn't fix the real problems (like the global, single-threaded state of the underlying design.)

 

 

 

1 hour ago, ketmar said:

 

low-level gfx people will always see OpenGL as a big fat monster with unacceptable limitations ("i cannot live without accessing my carry flag, no software can be made fast without that!"). and GPU vendors will support those people, rolling out various macroassemblers, declaring OpenGL "dead", and so on -- for their own reasons.

 

Let me repeat: Just because you do not acknowledge the reasons does not mean they do not exist. All this oldfangled stuff was deprecated for a very different reason: as graphics programming transitioned to shaders and buffers, the old approach made no sense anymore.

I think it's a moot point to discuss the result - it was a disaster. The OpenGL 3.x core profile was the worst of both worlds: it forfeited any option to do quick'n'easy programming, and it also failed to expose the new features in a good and efficient way. It took several more years, until OpenGL 4.4, before they finally managed to fix the design flaws by adding persistent buffers and saner resource allocation semantics. With this system it is actually very easy to replicate immediate-mode programming again, but without the drawbacks of having an explicit API for it. You essentially get the same result with only 10% of the API calls.
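The setup being described boils down to something like this sketch (GL 4.4 / ARB_buffer_storage; it assumes a context and a loader are already in place, and the buffer size is arbitrary):

```cpp
// Sketch of a persistently mapped vertex buffer. The returned pointer stays
// valid for the buffer's lifetime, so per-frame vertex data can be written
// directly into it, immediate-mode style.
#include <glad/glad.h>   // or any other GL 4.4+ loader

void *createStreamBuffer(GLuint &buf, GLsizeiptr size) {
    const GLbitfield flags = GL_MAP_WRITE_BIT
                           | GL_MAP_PERSISTENT_BIT
                           | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);       // immutable storage
    // Map once; no per-frame glMapBuffer/glBufferSubData calls are needed,
    // which is where the "10% of the API calls" claim comes from.
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```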

Of course it was too little, too late. Several hardware generations out there are incapable of using these features so it is unavoidable to use the broken stuff for older hardware if you cannot choose a compatibility profile.

 

You are also putting the blame on the wrong people. Those who declared OpenGL "dead" were not the GPU vendors but the graphics programming community at large.

 

 

 

1 hour ago, ketmar said:

 

yes, creating a good OpenGL implementation requires something GPU vendors won't do even if the fate of mankind relied on it: cooperation + open specs. because the only way we can get a good OpenGL implementation is to go FOSS (hello, linux!). also, Khronos has all the cruft the usual "design committee" always has, and they have zero interest in OpenGL. they simply stealthily moved away from it by introducing "modern OpenGL"; i knew there would be a Shiny New Gfx API the day they "deprecated" OpenGL, and was lazily watching which vendor would win the game of shoving their proprietary API into the standard. (btw, did you notice that Vulkan wasn't originally developed by Khronos? that is because Khronos cannot do it -- like any committee, they are unable to produce even half-good things)

 

Yeah, committees often indeed cannot produce even half-good things. And there you have the reason why OpenGL is in such a dismal state. Any attempt to fix and modernize it was obstructed by "interested parties" (read: organizations with a vested interest in preserving the status quo.) As a result we first got GLSL at a time when it was totally unusable on the hardware of the day. Next we got an API overhaul that took out all the convenience without replacing it with anything adequate, while carefully ensuring that none of the really bad parts got fixed; and once reality sank in that the revamped product was a pile of shit, instead of really fixing it, they brought back all the old cruft through the back door. And this was seriously the point where it became clear that OpenGL was heading for extinction, because in order to really fix it they'd have to do a second wave of deprecations and then address the problems in the underlying design. And once you get there, you'd probably end up with a less verbose variant of Vulkan anyway.

 

Let's be honest: Instead of Vulkan I would have very much preferred to see a fixed OpenGL - and by "fixed" I mean to really and thoroughly eliminate the global state that has been plaguing the API forever and then reimplement the parts that depend on the global state in user space in a way that they no longer dictate how the driver needs to be implemented.

 

 

1 hour ago, ketmar said:

 

next higher-level API will inevitably be "OpenGL reinvented", just like people keep reinventing things like message-passing OOP, coroutines, actor-model concurrency, and so on.

 

No, it definitely won't be OpenGL reinvented; it will be different, because at the very least any new API will have to pass some sort of context as a parameter to each API function instead of working on implicit global state. Implicit global state is a totally obsolete concept and causes problems for any software depending on it. Doom itself is a good example of code that has major reentrancy issues and a design that relies on a homogeneous global state; this is why there is no way to parallelize the playsim logic.
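To illustrate the difference with made-up declarations (neither of these is an existing API):

```cpp
// Purely illustrative, hypothetical declarations.
struct GfxContext;   // all rendering state lives in here

// Implicit global state, classic-GL style: which context the call affects
// depends on whatever was "made current" elsewhere, possibly on another thread.
void gfxSetBlendMode(int mode);

// Explicit state: the call names the context it touches, so two threads can
// each drive their own context without any hidden coupling.
void gfxSetBlendMode(GfxContext *ctx, int mode);
```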

 

 

1 hour ago, ketmar said:

 

that's why i said that OpenGL will not die, and i'll be watching its glorious return. maybe in another jacket, though. ;-)

 

While there will most definitely be some higher-level API for less performance-critical use cases, it will work differently from OpenGL.

I think for many non-real-time cases AMD's V-EZ layer for Vulkan may be an option, but unfortunately it is very inefficiently implemented and only works for smaller graphics loads; for a game engine the overhead is too high. But if someone learns from this project and eliminates its bottlenecks, we might get a command-buffer-based API without all of Vulkan's nastiness.

 

All that said, I think you are a bit too emotionally attached to the old way of doing things.

While I realize that with your old graphics hardware you have no other choice, once you do get a new graphics card I'd suggest experimenting with persistent buffers - I guarantee you that this won't be that different from immediate mode if set up properly. And using user-side code for matrix management definitely has its advantages, because you have far, far better control over how the matrices get used. For example, in GZDoom I added a third matrix to the transformation stack. This was a pain in the ass to do with immediate mode, because OpenGL chose to provide only two transformation matrix stacks, making it a hassle to operate on world coordinates for everything.

 

1 hour ago, ketmar said:

 

 

p.s.: yes, i am exceptionally good at explaining myself. especially if i have to talk about things i see as obvious ones. after all, other people are telepaths, and will understand me anyway, aren't they?! and my broken English helps too.

 

Your English isn't that bad, actually. I never had problems reading it.

 

1 hour ago, ketmar said:

core profile is not a facelift, it is a completely different thing. they simply refused to change the name. an "OpenGL in name only" mod. ;-)

 

Actually, no. Core profile is what most graphics developers were already using even with OpenGL 2.1, i.e. they used vertex buffers, they used shaders, they used external matrix libraries and they used generic vertex attributes instead of dedicated ones like glVertex, glTexCoord, etc., where the vertex attribute stuff is the least important thing here.

 

This is also the programming model that D3D was already enforcing, it didn't even have anything like immediate mode anymore.

D3D9 was more or less the equivalent to the core profile feature subset in OpenGL 2, but with far better buffer semantics and shader support (at that time HLSL was far, far better than GLSL which had far too high requirements on the hardware)

 

 

 

 


now i think i better understand what you meant, and i agree with many of your points. but i still have something to say. ;-)

 

47 minutes ago, Graf Zahl said:

Bad analogy. It's more like Vulkan is C and OpenGL is JavaScript.

yep, it can be viewed like that too. "typescript", maybe. ;-)

 

47 minutes ago, Graf Zahl said:

any such software would still be better off by using some middleware

most middleware is so monstrous, and has so many dependencies (or is a huge dependency by itself), that it is not in any way easier to use. if anything, it is harder to use, because you have to learn a new framework. a framework is something different from a library (although it is sometimes hard to draw a clear line between a framework and a library ;-). besides, OpenGL usually comes with the OS (or with the video drivers), and something like UE4 doesn't. ;-)

 

47 minutes ago, Graf Zahl said:

Yeah, making things ever more complex is definitely a solution to modernise them.

OpenGL is a modular design. adding things to a modular design is easier than adding things to a monolithic one. it may be harder than doing a new design from scratch, but having backwards compatibility is important -- i learned that lesson the hard way many times. ;-) the OpenGL extension system is not ideal, of course, but it does its job, and it still can be used to "fix" OpenGL. it is manageable.

 

47 minutes ago, Graf Zahl said:

as graphics programming transitioned to shaders and buffers, the old approach made no sense anymore

those are minor implementation details, actually. OpenGL just needs better ways to create various data sets, with more fine-grained immutability and parameterisation. it doesn't really matter how the underlying driver implements that.

 

47 minutes ago, Graf Zahl said:

but the graphics programming community at large

there is no such thing. ;-) i am doing graphics -- am i part of that community? different people have different needs. what you are referring to are people doing AAA videogames (and other high-to-very-high quality realtime graphics). while their efforts are most visible, they aren't the only people doing gfx. ;-) people like me, for example, just want to "get the work done", without investing much time into highly-optimised solutions. we know that our solutions aren't the best ones, and aren't the most performant ones, but we're ok with that. OpenGL gives us exactly what we need: an easy API to start with, with options to go deeper if we want.

 

47 minutes ago, Graf Zahl said:

at the very least any new API will have to pass some sort context as parameter to each API function, not work on some implicit global state

this is a minor change, actually. i'd like to get explicit context too. but that won't magically make OpenGL really different: current global state API calls will be just implemented on top of new explicit-state-passing calls. not a big deal.

 

47 minutes ago, Graf Zahl said:

All that said, I think you are a bit too emotionally attached to the old way of doing things.

i just hate some "modern" things that was designed with the assumption that i have nothing else to do besides mastering them. it is ok to have such things, but they are not replacement for higher-level things. if Vulkan won't be shoved into my throat as "OpenGL replacement", i wouldn't have anything against it.

 

47 minutes ago, Graf Zahl said:

and using user-side code for matrix management definitely has its advantages because you have far, far better control about how they get used

if anything, OpenGL needs more slots for matrices. GPUs are exceptionally good at matrix operations, yet i have to use something like GLM to do my vector/matrix math on CPU instead. i smell something wrong here.

 

47 minutes ago, Graf Zahl said:

Your English isn't that bad, actually. I never had problems reading it.

tnx. i never properly learned it (it was mostly from reading technical documentation and IRC chats), and i know that sometimes it can be unclear, and sometimes it may sound rude without my intending it. so i apologise for that.

Edited by ketmar

6 minutes ago, ketmar said:

those are minor implementation details, actually. OpenGL just needs better ways to create various data sets, with more fine-grained immutability and parameterisation. it doesn't really matter how the underlying driver implements that.

 

 

6 minutes ago, ketmar said:

 

different people have different needs. what you are referring to are people doing AAA videogames (and other high-to-very-high quality realtime graphics). while their efforts are most visible, they aren't the only people doing gfx. ;-)

 

No, but when I said "the graphics programmers community 'at large'" I meant that the vast majority is agreeing with the direction. Who cares about small-scale independent GPL game developers like us there...?

 

 

6 minutes ago, ketmar said:

people like me, for example, just want to "get the work done", without investing much time into highly-optimised solutions. we know that our solutions aren't the best ones, and aren't the most performant ones, but we're ok with that. OpenGL gives us exactly what we need: an easy API to start with, with options to go deeper if we want.

 

this is a minor change, actually. i'd like to get explicit context too. but that won't magically make OpenGL really different: current global state API calls will be just implemented on top of new explicit-state-passing calls. not a big deal.

 

You won't see me disagreeing here. OpenGL 4.5 and 4.6 made a giant leap in the right direction, but what it really needed was a core profile #2 where the problem stuff (like the implicit context and the legacy methods of creating resources) was excised as well. But the explicit context alone would have made this a separate API anyway, so my guess is that they decided it wasn't worth pursuing and went for something new right away. Even with a slimmed-down OpenGL, drivers would still have needed support for the old stuff, making them even more complex than they already are.

 

Is Vulkan the perfect result? Certainly not - it is far too low-level and far too verbose for that, and it probably ensured that we'll never see a higher-level but modernized graphics API unless it is implemented as middleware on top of it. But this won't happen overnight. A few attempts have been made already, but none was really the home run we'd need. The biggest problem right now is the lack of production-quality code to look at. The Vulkan tutorials out there may be good at showing how to set things up, but they are all far too basic overall and completely miss the point that a low-level API needs proper abstraction and should never be programmed directly from outside the abstraction layer.

 

 

6 minutes ago, ketmar said:

 

i just hate some "modern" things that was designed with the assumption that i have nothing else to do besides mastering them. it is ok to have such things, but they are not replacement for higher-level things. if Vulkan won't be shoved into my throat as "OpenGL replacement", i wouldn't have anything against it.

 

Agreed here. Vulkan, D3D12 and Metal simply cannot replace a higher level API for simpler tasks (but on the other hand, 3D game programming is not such a 'simple task' anymore.) Which makes Apple's decision all the more grating. OpenGL ES, despite its limitations, was a great way for flexible cross-platform 2D rendering so I seriously have to wonder what Apple wants here.

 

6 minutes ago, ketmar said:

if anything, OpenGL needs more slots for matrices. GPUs are exceptionally good at matrix operations, yet i have to use something like GLM to do my vector/matrix math in CPU instead. i smell something wrong here.

 

No, I think you are actually falling for a very common misconception here. This is not how OpenGL's matrix system works. Just because it is part of the API doesn't imply that it is done on the GPU. In fact, all operations in it are strictly on the CPU side, unless the matrix gets uploaded into the transformation buffer. The only hardware accelerated operation you get is what in GLSL 1.2 shaders is the 'ftransform' function, i.e. the transformation of the input vertex to screen coordinates.

Doing the common operations like glMultMatrix, glRotate, etc. on the GPU makes no real sense - a GPU is designed to efficiently process a massively parallelized task, but a single matrix multiplication is 64 float multiplications, or optimally 16 vector operations - before you could even send the request to the GPU, the CPU will long have finished the job itself. In fact, I think in most implementations these functions never hit the driver at all; they just manipulate the CPU-side copy of the matrix and mark it for upload to a buffer when a draw call gets issued. And once this approach becomes apparent it should also become clear why this internal matrix stack is such a bad thing: instead of letting you control your matrices' state and optimize it for the known use cases and possible changes, the driver has to perform constant tracking, validation and copying of the data.
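A small sketch of the user-side alternative (GLM plus a plain uniform upload; the program handle, uniform location and loader are assumptions here, not part of any particular engine):

```cpp
// Sketch: the matrix lives entirely in application code and is uploaded once
// per draw call, instead of being tracked by the driver behind glMultMatrix.
#include <glad/glad.h>   // any GL loader providing glUseProgram/glUniformMatrix4fv
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void drawRotated(GLuint program, GLint modelViewLoc,
                 const glm::mat4 &view, float angle) {
    // Plain CPU math, just like the driver would do it, but under our control.
    glm::mat4 model     = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0, 1, 0));
    glm::mat4 modelView = view * model;

    glUseProgram(program);
    glUniformMatrix4fv(modelViewLoc, 1, GL_FALSE, glm::value_ptr(modelView));
    // ... issue the draw call here
}
```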

 

 

 

 

 

 

6 hours ago, Graf Zahl said:

The biggest problem right now is the lack of production-quality code to look at. The Vulkan tutorials out there may be good at showing how to set things up but they are all far too basic overall and completely miss the point that a low level API needs proper abstraction and should never be programmed directly from outside the abstraction layer.

and tbh, i don't expect this to change soon. that is, professional gamedevs have their engines with their higher-level APIs (and probably even direct access to vendors to ask questions), so they are neither in need of, nor interested in, creating "simple middle-level libraries". and most other people get bored by all that manual work (if they ever struggle through the initial setup at all).

 

of course, it is not Vulkan's fault; rather, vendors (and Khronos) simply don't care about hobbyists. sooner or later people will build something usable, but for now... eh.

 

6 hours ago, Graf Zahl said:

but on the other hand, 3D game programming is not such a 'simple task' anymore

partially because you have to do too many little things from scratch. even today, people read about OpenGL Core, and go: "wut? shaders? pipelines? options? wtf is going on here?! i just want to draw my wall!" i mean, having a simple textured level displayed is important so you don't lose courage. you still can use good old "fixed pipeline" OpenGL to render a simple textured hallway in several lines of code. of course, in the final engine none of this will survive, but for starters, simple immediate mode is a godsend.
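for reference, the "several lines of code" look roughly like this (classic fixed-function GL; the texture object is assumed to be created elsewhere):

```cpp
// One textured wall quad in old-style immediate mode -- the kind of thing a
// beginner can get on screen almost immediately.
#include <GL/gl.h>

void drawWall(GLuint wallTex) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, wallTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, 0.0f, -5.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, 0.0f, -5.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f, 2.0f, -5.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f, 2.0f, -5.0f);
    glEnd();
}
```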

 

back in the old days the biggest stopper was writing efficient texturing (and, partially, an efficient s-buffer and the other tricks in the book). with GPUs and OpenGL i hoped that beginners would be able to use something simple to get started quickly. but now we're back at square one: it is all complicated again. sigh.

 

6 hours ago, Graf Zahl said:

I think you are actually falling for a very common misconception here. This is not how OpenGL's matrix system works. Just because it is part of the API doesn't imply that it is done on the GPU.

it is another implementation detail. i mean, the implementation can do it in several ways, including using the GPU. but there is no way to add more matrices into the transformation sequence (or define new sequences). i didn't mean using the GPU to perform single matrix operations, of course -- the roundtrip alone would swallow any speed gain. sorry, i have ideas, but i never bother to properly explain them (partly because the dw software hates me and stops accepting posts if i spend too much time writing; copy-paste into a new window loses all proper quotes for some reason; it is impossible to add more quotes when you're editing your post -- all quotes go to another editor and cannot be copy-pasted; and it is not a simple plain textarea, so my userscripts cannot work with it; dammit!).

 

and again, OpenGL transformation matrices are good for getting started. but then there is no way to control the things, they are hard-coded. everything we have in old-style OpenGL can be built upon more flexible API, mostly without hard-coded things. still, i see it being more high-level than Vulkan. i prolly should sit and write a draft of how i see it someday. and maybe even try to implement it. ;-)

Edited by ketmar

1 hour ago, ketmar said:

it is another implementation detail. i mean, the implementation can do it in several ways, including using the GPU. but there is no way to add more matrices into the transformation sequence (or define new sequences). i didn't mean using the GPU to perform single matrix operations, of course -- the roundtrip alone would swallow any speed gain.

 

Strictly speaking, yes, it is an implementation detail.

Practically speaking it isn't. No OpenGL driver in existence has ever implemented this part on the GPU, because that simply is not efficient. The GPU is efficient when you perform several thousand math operations in parallel, which happens when processing a large batch of vertices or a large batch of pixels with the same settings, but for a single math operation it really is not. As a single-threaded math processor - which this essentially would be - it couldn't keep up with any CPU at all; that even goes for those super-expensive GeForce RTX cards. They all get their power not from doing single operations lightning fast but from doing a huge number of similar operations in parallel.

 

So theoretically speaking, what would it mean if the matrices were on the GPU? The answer is, very little - possibly nothing, but most likely a performance hit. They are data that frequently changes so the GPU either needs to keep a large buffer where every iteration of the matrix is stored or it needs to update the matrix for each change. And that's not that much different from having the buffer on the user side where the user knows precisely what the matrix is used for, when it changes, and so on. And since the user knows better what's up, such code on the user side is nearly by definition more efficient. OpenGL cannot know itself if you need to read it back, or want to push it onto the stack or if you just want to upload it to the GPU. So all it can provide is a very generic implementation that has to cover all potential cases - and that is very inefficient. A call to glMultMatrix is doing a lot more than just performing the 64 multiplications, things user side code doesn't need to do when it knows more about the matrix's context.

 

 

 

1 hour ago, ketmar said:

and again, OpenGL transformation matrices are good for getting started.

 

Sure. But using a math library does not change much here, except for the fact that you do not have a small set of system provided matrices but you instead can define your own variables and manipulate them as you like. Plus, GLM does not just offer matrices but also vectors and other useful constructs that do not exist in OpenGL itself

 

 

 

3 hours ago, Graf Zahl said:

So theoretically speaking, what would it mean if the matrices were on the GPU?

i meant adding more matrix transformation slots for vertex/attributes, with the ability to selectively turn them on/off. it may be useful in various cases, and it may be faster to toggle one flag than to recreate the whole matrix transformation when you need to temporarily include/exclude something. then model, world, and projection matrices will become simple ordinary predefined slots.

 

3 hours ago, Graf Zahl said:

But using a math library does not change much here, except for the fact that

...i don't have it at hand wherever i have OpenGL available. ;-) i always dreamt about having basic vector/matrix math functions included in OpenGL, so i don't have to drop yet another library into each project where i just need to do some gfx. it may or may not be a speed demon, but it is handy to have it there.

 

sure, that means adding even more code to the library, but this part is basically independent, and it doesn't need to be "accelerated" by the GPU (there is no sense in doing that, as you pointed out, and i agree).

 

that is, i want OpenGL to be useful "out of the box" for various gfx tasks where execution speed is not critical, but programmer time is. because i can't realistically imagine a gfx task (besides "hello, cube", prolly) where you won't need to do some algebra. just a little handy feature.


[ 48%] Linking CXX executable ../vavoom-dedicated.bin
CMakeFiles/vavoom-dedicated.dir/p_clip.cpp.o: In function `VViewClipper::ClipAddSubsectorSegs(subsector_t const*, TPlane const*)':
/home/danfun64/Documents/doom-src/k8vavoom/source/p_clip.cpp:1339: undefined reference to `r_draw_pobj'
CMakeFiles/vavoom-dedicated.dir/p_clip.cpp.o: In function `VViewClipper::ClipAddAllSubsectorSegs(subsector_t const*, TPlane const*)':
/home/danfun64/Documents/doom-src/k8vavoom/source/p_clip.cpp:1377: undefined reference to `r_draw_pobj'
collect2: error: ld returned 1 exit status
source/CMakeFiles/vavoom-dedicated.dir/build.make:3332: recipe for target 'source/../vavoom-dedicated.bin' failed
make[2]: *** [source/../vavoom-dedicated.bin] Error 1
CMakeFiles/Makefile2:724: recipe for target 'source/CMakeFiles/vavoom-dedicated.dir/all' failed
make[1]: *** [source/CMakeFiles/vavoom-dedicated.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

 

 

 

32 minutes ago, ketmar said:

i meant adding more matrix transformation slots for vertex/attributes, with the ability to selectively turn them on/off. it may be useful in various cases, and it may be faster to toggle one flag than to recreate the whole matrix transformation when you need to temporarily include/exclude something. then model, world, and projection matrices will become simple ordinary predefined slots. 

 

I'm sorry, but I really don't get what you want to achieve with this that cannot be done with existing features. With modern OpenGL you can define as many matrix uniforms as you need.

In GZDoom I have several non-standard matrices - and I had them for many years, even back when the engine still was running on GL 2.1.

 

 

32 minutes ago, ketmar said:

...i don't have it at hand where i have OpenGL available. ;-) i always dreamt about having basic vector/matrix math functions included in OpenGL, so i don't have to drop yet another library in each project where i just need to do some gfx. it may, or may not be speed demon, but it is handy to have it there.

 

 

You are forgetting something here: Someone has to maintain the code. In the end it's extraneous to the driver itself and has no place there. I am sorry to say, but this is actually a very, very lazy justification for letting others do your work.

 

 

32 minutes ago, ketmar said:

sure, that means adding even more code to the library, but this part is basically independent, and need not be "accelerated" with GPU (there is no sense doing that, as you pointed, and i agree).

 

It's still there, it still needs to be maintained, tests need to be set up for it, it increases compile times, and so on. And it doesn't provide anything of value to the driver. So it ultimately costs money that doesn't bring anything back in return. Not much, but it adds up.

 

32 minutes ago, ketmar said:

 

that is, i want OpenGL to be useful "out of the box" for doing various gfx tasks where execution speed is not critical, but programmer's time is. because i can't realistically imagine gfx task (besides "hello, cube", prolly) where you won't need doing some algebra. just a little handy feature.

 

Every single OpenGL tutorial these days points to GLM, which is relatively pain-free to set up. It's a header-only library, so all you need is #include &lt;glm/glm.hpp&gt;. How much easier can it get?

 


Here's that backtrace you requested. Sorry it took so long.
 


#0  0x0000555557ebeb60 in ?? ()
#1  0x00005555556d452e in ObjectSaver::IsError (this=0x7fffffffc490)
    at /home/danfun64/Documents/doom-src/k8vavoom/vccrun/vcc_run_serializer.cpp:386
#2  0x00005555556d24fd in VObject::execappSaveOptions ()
    at /home/danfun64/Documents/doom-src/k8vavoom/vccrun/vcc_run_vobj.cpp:232
#3  0x00005555556b1344 in RunFunction (func=0x555555c55f30)
    at /home/danfun64/Documents/doom-src/k8vavoom/source/pr_exec.cpp:490
#4  0x00005555556b14e5 in RunFunction (func=0x555557bb58b0)
    at /home/danfun64/Documents/doom-src/k8vavoom/source/pr_exec.cpp:541
#5  0x00005555556b17d3 in RunFunction (func=0x555557c1d860)
    at /home/danfun64/Documents/doom-src/k8vavoom/source/pr_exec.cpp:581
#6  0x00005555556b1ad4 in RunFunction (func=0x555557ca2890)
    at /home/danfun64/Documents/doom-src/k8vavoom/source/pr_exec.cpp:616
#7  0x00005555556bd1bf in VObject::ExecuteFunction (func=0x555557ca2890)
    at /home/danfun64/Documents/doom-src/k8vavoom/source/pr_exec.cpp:2899
#8  0x00005555556e48eb in VGLVideo::onEvent (evt=...)
    at /home/danfun64/Documents/doom-src/k8vavoom/vccrun/modules/sdlgl/mod_sdlgl.cpp:2274
#9  0x00005555556e4c1d in VGLVideo::dispatchEvents ()
    at /home/danfun64/Documents/doom-src/k8vavoom/vccrun/modules/sdlgl/mod_sdlgl.cpp:2358
#10 0x00005555556e54bb in VGLVideo::runEventLoop ()

 

 

 


@Danfun64 as for the dedicated server -- it is generally not supported between "official" builds. don't try to build it.

the use-after-free bug in vccrun is fixed, thank you a lot! your backtrace pointed right at it. everything should work now.

1 hour ago, Graf Zahl said:

With modern OpenGL

i am never talking about that. ;-)

 

1 hour ago, Graf Zahl said:

I'm sorry, but I really don't get what you want to achieve with this that cannot be done with existing features.

using more transformation matrices without writing shaders, and conditionally building the final matrix on the GPU. basically, offload the matrix multiplication to the GPU, and have a way to selectively turn each matrix in the chain on/off. of course, there are other ways to do that, but i can see how this "matrix chain" could be useful.
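there is no such feature in OpenGL itself, of course; a purely hypothetical user-side sketch of the "chain" idea might look like this:

```cpp
// Hypothetical "matrix chain": a list of (enabled, matrix) slots that are
// multiplied together into the final transform; toggling a slot is one flag
// flip instead of rebuilding everything by hand.
#include <glm/glm.hpp>
#include <vector>

struct MatrixSlot {
    bool      enabled = true;
    glm::mat4 m = glm::mat4(1.0f);
};

struct MatrixChain {
    std::vector<MatrixSlot> slots;   // e.g. projection, view, model, extras

    glm::mat4 compose() const {
        glm::mat4 result(1.0f);
        for (const MatrixSlot &s : slots)
            if (s.enabled) result = result * s.m;   // skip disabled slots
        return result;
    }
};
```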

 

1 hour ago, Graf Zahl said:

You are forgetting something here: Someone has to maintain the code.

the FOSS community. we're doing excellent work on linux, and i can't see why the same can't be done with OpenGL. actually, as Khronos has basically abandoned OpenGL, i foresee a community-driven OpenGL implementation on top of Vulkan/whatever in the future.

 

1 hour ago, Graf Zahl said:

I am sorry to say, but this is actually a very, very lazy justification for letting others do your work.

it is not my fault that GPU vendors don't want to FOSS their code. i am here, ready to jump in, write features i want, and support them. where's the code i can work with?!

 

1 hour ago, Graf Zahl said:

to the driver

you keep talking about "driver", as if it has anything in common with OpenGL. there are no drivers with OpenGL implementation insde them -- it would be completely insane to drag OpenGL into kernel: it is userland library. and it is implemented as userland library in all drivers out there. it has nothing to do with drivers implementation -- except using proprietary undocimented driver API and GPU commands, of course. but i can't care less about that: if vendors want to sell me their GPUs, they first should give me good OpenGL implementation. i don't care how they'll do it. they definitely don't need help with it -- that's why it is proprietary closed source. ok, i got the message. now, gimme my OpenGL! ;-)

 

1 hour ago, Graf Zahl said:

How much easier can it get?

not downloading GLM. it is way easier. besides, what if i want pure C? yes, there are people who use pure C. or another language where i don't have a handy algebra library? with that library in OpenGL, i can use it from C without looking for other solutions, and i can use it from almost any other language out there by just porting the relevant header files.

12 hours ago, ketmar said:

i am never talking about that. ;-)

 

using more transformation matrices without writing shaders, and conditionally building the final matrix on the GPU. basically, offload the matrix multiplication to the GPU, and have a way to selectively turn each matrix in the chain on/off. of course, there are other ways to do that, but i can see how this "matrix chain" could be useful.

 

Yeah, understandable. But you also have to understand that this kind of 3D programming is dead. Has been dead for many, many years. OpenGL trying to keep this programming model alive was one of the main reasons why it fell out of favor so many years ago and let D3D score an easy win.

 

Quote

 

the FOSS community. we're doing excellent work on linux, and i can't see why the same can't be done with OpenGL. actually, as Khronos has basically abandoned OpenGL, i foresee a community-driven OpenGL implementation on top of Vulkan/whatever in the future.

 

Why? Hasn't it sunk in yet that the niche depending on this legacy stuff has no critical mass? Not even in non-game programming. Even for productivity software all that shader functionality is essentially required to develop products that meet modern users' needs.

Legacy OpenGL is the ONLY still available 3D API that hasn't abandoned fixed function. All others have tossed this out more than 10 years ago.

 

What this is great for is quick mock-ups, but that's essentially the extent of it. It cannot even do Doom's lighting model correctly.

 

 

Quote

 

it is not my fault that GPU vendors don't want to FOSS their code. i am here, ready to jump in, write features i want, and support them. where's the code i can work with?!

 

In case you haven't noticed yet, on Linux there's Open Source drivers for both AMD and Intel. Not for NVidia yet, and certainly also not for 10 year old hardware, only for modern systems. Of course these mainly target modern OpenGL features and Vulkan because that's what most graphics programmers want to use.

 

I personally do not see OpenGL 2 going away completely any time soon - there's simply too much software out there depending on it, so where's the problem? You want to stick with the old - you can. But since it's old, it has essentially been feature-frozen, and that's something that won't ever change.

 

Quote

 

you keep talking about "driver", as if it has anything in common with OpenGL. there are no drivers with OpenGL implementation insde them -- it would be completely insane to drag OpenGL into kernel: it is userland library. and it is implemented as userland library in all drivers out there. it has nothing to do with drivers implementation -- except using proprietary undocimented driver API and GPU commands, of course. but i can't care less about that: if vendors want to sell me their GPUs, they first should give me good OpenGL implementation. i don't care how they'll do it. they definitely don't need help with it -- that's why it is proprietary closed source. ok, i got the message. now, gimme my OpenGL! ;-)

 

Here you go: https://www.mesa3d.org/

Also, it doesn't matter if the code resides in kernel or user space - it's still there, it still has to be programmed, compiled and maintained. So it still bloats the driver package for something that should really be a user-side library.

 

 

Quote

 

not downloading GLM. it is way easier. besides, what if i want pure C? yes, there are people who use pure C. or another language where i don't have a handy algebra library?

 

Google is your friend:

 

C: https://stackoverflow.com/questions/4501322/c-libraries-for-mathematical-matrix-operations

Java: https://stackoverflow.com/questions/10815518/java-matrix-libraries

Rust: https://github.com/rustsim/nalgebra

 

I haven't checked their content, but that search took me less than a minute.

 

 

EDIT:

 

Here's a link I found regarding the topic of these old, bloated APIs and why they are now left behind. It's a really interesting read:

https://www.gamedev.net/forums/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/?tab=comments#comment-5215019

 

BTW, these new APIs like Vulkan and D3D12 are seeing the fastest adoption of new graphics interfaces ever. Why? Because they finally strip away all the stuff that causes bugs and performance regressions.

 

Edited by Graf Zahl

16 hours ago, Graf Zahl said:

But you also have to understand that this kind of 3D programming is dead.

it is not dead. as i said before, gamedevs aren't the only people on the planet who need to do gfx with a GPU. if anything, i couldn't care less about the needs of game developers -- they have the whole of Khronos, NVidia, AMD, and others thinking about that. so we, simple people with simple needs, have to think for ourselves now. ;-)

 

16 hours ago, Graf Zahl said:

Why? Hasn't it sunk in yet that the niche depending on this legacy stuff has no critical mass? Not even in non-game programming.

because it is convenient. Khronos tried to kill it instead of repairing it (and succeeded, for a short while), but remember, no good thing ever dies! ;-) of course, when Khronos declared something "obsolete", people started looking for "non-obsolete" solutions. which means fewer tutorials with the "obsolete" stuff, and fewer people even know that the "obsolete" stuff is not that obsolete, and may be exactly what they need.

 

this usually works for some time, but then people start looking for something simpler, and they will rediscover the Good Old APIs. because "close to the metal, without all the stuff we don't need, we will implement it ourselves" is not a one-size-fits-all solution.

 

16 hours ago, Graf Zahl said:

In case you haven't noticed yet, on Linux there's Open Source drivers for both AMD and Intel.

let's be honest: Intel is not a GPU worth noting at all, and AMD drivers are crap. and i am not talking about FOSS drivers at all, i am talking about OpenGL implementations on top of the driver API. those OpenGL implementations are huge, they take a lot of resources to support, and i believe the FOSS community can help. of course, it means that GPU vendors would have to make their internal specs and documentation available to the public too. but they prefer to moan about all the hard work they have to do to support "obsolete stuff", like it wasn't their own choice. i have zero sympathy here.

 

17 hours ago, Graf Zahl said:

But since it's old it has essentially been feature-frozen and that's something that won't ever change.

unless we'll say "fuck you, Khronos!" this is where Vulkan can be useful for us as a common low-level layer that can be used to implement universal, vendor-independent "classic OpenGL" implementation. there is already Mesa code we can start with.

 

of course, i know about Mesa. it may be a good starting point (or may not, i don't know yet). if i'm ever forced to start such a project myself, i'll take a closer look.

 

17 hours ago, Graf Zahl said:

I haven't checked their content, but that search took me less than a minute.

compare this:

1. google for it.

2. take some time to evaluate it.

3. download it.

4. compile and install it / try to integrate it in your source tree.

with

1. just use it, as everything is already there.

 

i cannot see how 1-2-3-4 can win over just 1. ;-)

 

17 hours ago, Graf Zahl said:

interesting read

 

17 hours ago, Graf Zahl said:

are seeing the fastest adoption of new graphics interfaces ever. Why?

as i said before several times -- because vendors don't want to cooperate. you're trying to convince me that for vendors it is cheaper to implement a thinner API, but i never objected to that! ;-) if anything, you're only reiterating my point -- "FOSS or STFU!" ;-)

8 hours ago, ketmar said:

it is not dead. as i said before, gamedevs aren't the only people on the planet who need to do gfx with a GPU. if anything, i couldn't care less about the needs of game developers -- they have the whole of Khronos, NVidia, AMD, and others thinking about that. so we, simple people with simple needs, have to think for ourselves now. ;-)

 

Do I have to repeat that in other fields the same holds true? I work with 3D developers on a daily basis in a non-game-related job and these people unanimously tell me that it's "dead", only old legacy software that's too crusty to be ported still depends on it.

 

 

Quote

 

because it is convenient. Khronos tried to kill it instead of repairing it (and succeeded, for a short while), but remember, no good thing ever dies! ;-)

 

So why is it dead then?

 

 

Quote

 

of course, when Khronos declared something "obsolete", people started looking for "non-obsolete" solutions. which means fewer tutorials with the "obsolete" stuff, and fewer people even know that the "obsolete" stuff is not that obsolete, and may be exactly what they need.

 

You are letting your personal bias get in the way of reality here!

 

The change wasn't driven by a committee but by the users of the API, the developers. If you look around a bit you'll see that Khronos wasn't the one starting this - they only reacted when Microsoft became successful with a much leaner and streamlined API that shed a lot of the baggage and embraced modern features. They built the leaner and more streamlined API because they listened to the concerns of graphics programmers, while Khronos listened to the concerns of business executives for far too long.

And again: Sticking to the old and crusty is the main reason why OpenGL drivers, aside from NVidia's, flat out suck. The whole thing is just far too complex to allow writing an efficient driver. NVidia pours money into their OpenGL driver because aside from consumer graphics cards they also sell extremely expensive enterprise solutions. And if you know enterprise software, it very often is very, very old stuff kept on life support at a cost higher than redeveloping it would be, mainly because it is often also very badly written. But that very property makes this kind of software a very poor precedent for what is considered "modern" and "up-to-date". Just a few weeks ago I learned that one product a sister company of my employer develops is done in Visual FoxPro! They just went on and on with their original code and developed themselves into a corner they probably won't get out of easily anymore. It is very much the same with this enterprise software depending on old OpenGL, and you can bet that NVidia is being paid massive sums of money by these interested parties to keep support alive.

AMD and Intel are far, far less invested in this market segment, and as a result their driver support is orders of magnitude weaker. They make sure that the handful of old games still in use works fine and that's it.

 

And pretty much the same happened with Vulkan. It was neither Microsoft nor Khronos who started this - they only jumped onto the bandwagon when AMD, in cooperation with some developers, showed the advantages of a low level API. They had to jump onto the bandwagon because if they hadn't, AMD would have run away with Mantle and owned the market.

 

There were no evil forces at work here, just normal market dynamics. The old fell out of favor because despite apparently being simpler on the surface, it ultimately made things a lot more complicated and harder to develop for.

Just an example: In GZDoom we absolutely did not manage to implement an efficient way to upload the software-rendered image to the GPU without a serious performance hit on some platforms. Why? Because the OpenGL texture interface gives no control over how often the texture image needs to be copied. Even if you use the supposedly fast path, there's still no guarantee that it really is fast. It was different on hardware from all 3 vendors.

With Vulkan it's dead easy. You create a CPU-side staging buffer, render the image into it, and then queue an upload command, while the 2D content is being prepared by the CPU in parallel. Yes, it's surely more code, but the added control really helps here with performance.
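Roughly, that path looks like the sketch below. This is a minimal illustration only, with error handling, the image layout transitions and the actual queue submission left out; `hostVisibleMemType` is assumed to be a memory type index that is valid for the buffer and is HOST_VISIBLE | HOST_COHERENT, and none of the names are GZDoom's actual code:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// Upload one software-rendered RGBA8 frame via a CPU-side staging buffer.
// Assumes `cmd` is in the recording state and `image` has already been
// transitioned to VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL.
void UploadFrame(VkDevice device, uint32_t hostVisibleMemType,
                 VkCommandBuffer cmd, VkImage image,
                 const void *pixels, uint32_t width, uint32_t height)
{
    const VkDeviceSize size = VkDeviceSize(width) * height * 4;

    // 1. create the CPU-side staging buffer
    VkBufferCreateInfo bufInfo = {};
    bufInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    bufInfo.size = size;
    bufInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
    bufInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    VkBuffer staging;
    vkCreateBuffer(device, &bufInfo, nullptr, &staging);

    VkMemoryRequirements req;
    vkGetBufferMemoryRequirements(device, staging, &req);

    VkMemoryAllocateInfo allocInfo = {};
    allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize = req.size;
    allocInfo.memoryTypeIndex = hostVisibleMemType;
    VkDeviceMemory stagingMem;
    vkAllocateMemory(device, &allocInfo, nullptr, &stagingMem);
    vkBindBufferMemory(device, staging, stagingMem, 0);

    // 2. render/copy the software-rendered image into it
    void *mapped = nullptr;
    vkMapMemory(device, stagingMem, 0, size, 0, &mapped);
    memcpy(mapped, pixels, size_t(size));
    vkUnmapMemory(device, stagingMem);

    // 3. queue the upload; the GPU performs the copy while the CPU is free
    //    to prepare the next batch of 2D content in parallel
    VkBufferImageCopy region = {};
    region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    region.imageSubresource.layerCount = 1;
    region.imageExtent = { width, height, 1 };
    vkCmdCopyBufferToImage(cmd, staging, image,
                           VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);
    // (the staging buffer and its memory must stay alive until the copy
    //  has actually executed on the GPU)
}
```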

And sorry, this easily trumps everything a simpler-on-the-surface API can provide.

 

 

Quote

 

this usually works for some time, but then people start looking for something simpler, and they will rediscover Good Old APIs. because "close to metal, without all the stuff we don't need, we will implement it ourselves" is not a one-size-fits-all solution.

 

Dream on! Not even WebGL, which is supposed to be a higher-level abstraction, went back to the immediate mode API. It uses all the stuff you like to criticize, like requiring shaders and buffers to even work.

 

 

Quote

 

let's be honest: Intel is not a GPU that's worth noting at all, and AMD drivers are crap. and i am not talking about FOSS drivers at all, i am talking about OpenGL implementations on top of the driver API. those OpenGL implementations are huge, they take a lot of resources to support, and i believe that the FOSS community can help.

 

You know what's interesting about the FOSS community? If something is needed, they'll develop it. Nobody has ever wasted energy on trying to implement an old-style immediate mode API on top of D3D, OpenGL core, or Vulkan. Why? Because there's no real interest here.

 

Quote

 

of course, it means that GPU vendors will have to make their internal specs and documentation available to the public too. but they prefer to moan about all the hard work they have to do to support "obsolete stuff", like it wasn't their own choice. i have zero sympathy here.

 

Both AMD and Intel are being supported by FOSS drivers on Linux. And the result? There's token OpenGL 2.1 support, everything modern requires a core profile, and even that doesn't get full attention anymore, with most energy invested into Vulkan. Their FOSS drivers are also a lot less efficient than the former proprietary solutions.

And with NVidia, making their driver Open Source would simply kill their enterprise business model - which is where they get most of their money from. Currently they charge extortionate prices for commercially used graphics cards - and they still sell, because their driver is so much better than the competition's.
 

 

Quote

 

unless we say "fuck you, Khronos!" ourselves. this is where Vulkan can be useful for us: as a common low-level layer that can be used to implement a universal, vendor-independent "classic OpenGL" implementation. there is already Mesa code we can start with.

 

Guess what: That's the idea here! Let people develop the graphics API they need, not let their workflow be dictated by external forces.

 

 

Quote

of course, i know about Mesa. it may be a good starting point (or may not, i don't know yet). if i'm forced to start such a project myself, i'll take a closer look.

 

compare this:

1. google for it.

2. take some time to evaluate it.

3. download it.

4. compile and install it / try to integrate it in your source tree.

with

1. just use it, as everything is already there.

 

i cannot see how 1-2-3-4 can win over just 1. ;-)

 

I'm sorry but what you describe here is normally given a very unpleasant name: "Laziness".

Normally you spend a day or two setting up a project and months, if not years, developing it. What do a few more days matter, then? You should spend them anyway to make an informed evaluation of what you really need and then choose the best option - and not just select the first solution that's being spoon-fed to you.

 

Share this post


Link to post
37 minutes ago, Graf Zahl said:

I work with 3D developers on a daily basis

i work with 3D graphics on a daily basis, and i am using "classic" OpenGL. it would be hard to use something that is "dead", wouldn't it?

 

37 minutes ago, Graf Zahl said:

So why is it dead then?

see above.

 

37 minutes ago, Graf Zahl said:

You are letting your personal bias get in the way of reality here!

i can assure you that i am not living in some alternate reality -- it would be hard for me to contact this one from there. ;-) also, i can assure you that i've seen people saying "fuck that 3d" after seeing the amount of things they're required to do to fire up "modern" OpenGL. i wouldn't even dare to show them Vulkan sample code. ;-) some of those people returned after i told them that "legacy" OpenGL is not dead, and they can use it.
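just to illustrate what those people see (a rough sketch of mine, not anybody's production code): with a window and a GL 1.x context already created, this is about all the per-frame code "legacy" GL needs to show a shaded triangle, and the trailing comment is the checklist the "modern" way adds on top:

```cpp
#include <GL/gl.h>

// "classic" OpenGL: the whole per-frame drawing code
void DrawFrame() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}

// the core-profile equivalent needs, before any triangle shows up:
//   * a vertex shader and a fragment shader, compiled and linked at runtime
//   * a VBO filled with vertex data and a VAO describing its layout
//   * uniform plumbing for the matrices
// and Vulkan additionally wants an instance, device, swapchain, render pass,
// pipeline, command buffers and synchronization objects.
```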

 

37 minutes ago, Graf Zahl said:

The change wasn't driven by a committee but by the users of the API, the developers.

a very small set of developers. and you keep telling me that there are no other devs who need to work with gfx. that is where i cannot agree.

 

of course, people who just "need the work done" won't be vocal, and won't submit proposals to the ARB. they have plenty of other things to do. it doesn't mean that those people don't exist, or that those people are few. and they usually don't give a shit about getting the most FPS from the GPU, they just need something simple, yet extensible and powerful enough for them to use. that is where "classic" OpenGL shines.

 

Khronos and GPU vendors neither know nor care about that segment of consumers. i know, and i care -- after all, i am one of those ignored people.

 

37 minutes ago, Graf Zahl said:

There were no evil forces at work here, just normal market dynamics.

i didn't say that there are some evil forces at work. i just have a huge distaste for the so-called "market" as a whole.

 

37 minutes ago, Graf Zahl said:

With Vulkan it's dead easy.

ah, that's why the Vulkan renderer in GZDoom is at least on its second iteration, and still not there! that's because with Vulkan it is easy. i see.

 

37 minutes ago, Graf Zahl said:

WebGL

more complete garbage from a garbage committee. meh.

 

37 minutes ago, Graf Zahl said:

Nobody has ever wasted energy on trying to implement an old-style immediate mode API on top of D3D, OpenGL core, or Vulkan.

windows devs are generally eating whatever shit m$ feeds them, and they don't like OpenGL per se. implementing "classic" OpenGL on top of "modern" OpenGL makes no sense, as both already work (yet there are implementations of "classic" OpenGL on top of OpenGL ES). as for Vulkan, it is too young, and too hard to use for hobbyist devs to jump in. of course, there will be a "classic" OpenGL implementation on top of it later.

 

37 minutes ago, Graf Zahl said:

I'm sorry but what you describe here is normally given a very unpleasant name: "Laziness".

just imagine the situation where "doing some vector math and rendering some 3D" is not the primary task. and one may not even need the "best" option (whatever that means). you have the usual professional/seasoned programmer's bias here, thinking that other people's goal is to write code. but for many (if not most) people writing code is just a side goal that helps them solve their main task. it is not laziness, they just have a different set of priorities. i myself am a hobbyist, and i am mostly writing code to solve my tasks. so i can imagine what a hobbyist may want. and i don't need the "best code", i need something that is good enough for me. eh, this whole k8vavoom journey started only because nobody else jumped in to do what i want. ;-)

Edited by ketmar

Share this post


Link to post
1 hour ago, Graf Zahl said:

Just an example: In GZDoom we absolutely did not manage to implement an efficient way to upload the software-rendered image to the GPU without a serious performance hit on some platforms. Why? Because the OpenGL texture interface gives no control over how often the texture image needs to be copied. Even if you use the supposedly fast path, there's still no guarantee that it really is fast. It was different on hardware from all 3 vendors.

 

With Vulkan it's dead easy. You create a CPU-side staging buffer, render the image into it, and then queue an upload command, while the 2D content is being prepared by the CPU in parallel. Yes, it's surely more code, but the added control really helps here with performance.

Actually, right now the vulkan branch creates an Image object in CPU-mappable memory. It seems to roughly match the OpenGL performance here. However, what you are describing is one of the other possible options, along with the fact that there are two transfer queue families available. Neither the current method nor the OpenGL one matches the speed of the Direct3D 9 way used in ZDoom. At some point my plan has been to try them all and figure out the performance characteristics to determine which is most ideal.

 

You are of course right that none of this is possible in OpenGL, as you can't properly describe where buffers are stored or how many copies it creates. It is also worth mentioning that for integrated GPUs the current mapping method (on the vulkan branch) means no transfer has to be done at all. In short: selecting the right buffer location is actually quite complex - something OpenGL and Direct3D 11 never found a solution for.
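For illustration, that selection usually boils down to matching the resource's allowed memory types against the property flags you want: DEVICE_LOCAL for GPU-only data, HOST_VISIBLE | HOST_COHERENT for a mappable staging buffer, or DEVICE_LOCAL | HOST_VISIBLE where available (which is what makes the "no transfer at all" case on integrated GPUs fall out for free). A small hypothetical helper, not the code actually used on the vulkan branch:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Returns the index of a memory type that is allowed by `typeBits`
// (taken from VkMemoryRequirements) and has all `wanted` property flags,
// or UINT32_MAX if no such type exists and the caller has to fall back
// to a less ideal combination.
uint32_t FindMemoryType(VkPhysicalDevice physDev, uint32_t typeBits,
                        VkMemoryPropertyFlags wanted)
{
    VkPhysicalDeviceMemoryProperties props;
    vkGetPhysicalDeviceMemoryProperties(physDev, &props);
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        const bool allowed = (typeBits & (1u << i)) != 0;
        const bool hasProps = (props.memoryTypes[i].propertyFlags & wanted) == wanted;
        if (allowed && hasProps) return i;
    }
    return UINT32_MAX;
}
```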

 

38 minutes ago, ketmar said:

ah, that's why the Vulkan renderer in GZDoom is at least on its second iteration, and still not there! that's because with Vulkan it is easy. i see.

There is no second iteration. I created a second branch mainly because I still wanted to be able to reference the original. What happened here is that the master branch and the vulkan branch got so much out of sync that it was easier to create a new branch from master than merge the changes in.

 

Personally I blame a lot of the initial setback on the vulkan-tutorial.com tutorial. It is actually amazing how much harder that tutorial makes it to learn Vulkan. It spends all its time showing you how not to arrange code (it picks a spaghetti solution almost every time) while not telling you much about the important parts of the Vulkan API. That I ended up finding the spec easier to learn from than the tutorial is a clear sign it failed at what it set out to do. :)

Share this post


Link to post
14 minutes ago, ketmar said:

i work with 3D graphics on a daily basis, and i am using "classic" OpenGL. it would be hard to use something that is "dead", wouldn't it?

 

see above.

 

Enlighten me, please. Like I said, the graphics developers I talk to do not want to be bothered with this old stuff. A few still have to work with it because they work on old software but they don't like it.

 

So what are you doing with 3D that still works with the old stuff?

 

 

 

14 minutes ago, ketmar said:

 

i can assure you that i am not living in some alternate reality -- it would be hard for me to contact this one from there. ;-) also, i can assure you that i've seen people saying "fuck that 3d" after seeing the amount of things they're required to do to fire up "modern" OpenGL. i wouldn't even dare to show them Vulkan sample code. ;-) some of those people returned after i told them that "legacy" OpenGL is not dead, and they can use it.

 

What are these people working on? I'm just curious. The 3D guys I know are all people who just love tinkering with this stuff and get enthusiastic about the possibilities of the new low level APIs. These are concerns I normally only get from people whose work has some occasional overlaps with 3D but not from full-time 3D developers.

 

In any sane environment that low level stuff should be wrapped in some abstraction anyway; the same goes for any direct API calls in an area that may need changing. I spent two months of hard work last year to fully abstract GZDoom's renderer from the API, because it was all still in a state dictated by the needs of OpenGL 2.1. So ideally you only have one or two developers who ever directly work with the API - the rest should be done at a higher level where all that nastiness does not apply. I wish I had done GZDoom like that from the start, but the immediate mode is so temptingly easy to use that any such abstraction is easily forfeited for a very bad case of vendor lock-in.
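To give an idea of what such an abstraction boils down to (a simplified sketch with made-up names, not GZDoom's actual interface): the game-side renderer only ever talks to something like this, and the gl*/vk* calls live exclusively inside the backend implementations.

```cpp
#include <cstddef>
#include <cstdint>

struct VertexBufferHandle { uint32_t id; };
struct TextureHandle      { uint32_t id; };

// The only thing the high-level renderer is allowed to see.
class IRenderBackend {
public:
    virtual ~IRenderBackend() = default;
    virtual VertexBufferHandle CreateVertexBuffer(const void *data, size_t size) = 0;
    virtual TextureHandle      CreateTexture(const void *pixels, int w, int h) = 0;
    virtual void Draw(VertexBufferHandle vb, TextureHandle tex,
                      uint32_t firstVertex, uint32_t vertexCount) = 0;
    virtual void Present() = 0;
};

// One concrete implementation per API; swapping APIs means swapping this object.
class OpenGLBackend : public IRenderBackend { /* gl* calls live here */ };
class VulkanBackend : public IRenderBackend { /* vk* calls live here */ };
```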

 

In its current state I could even plug in a D3D renderer if I had a GLSL -> HLSL cross-compiler to use the shaders with it.

BTW, in GZDoom the OpenGL backend code right now is a mere 215 kb, most of which is abstract backing implementation with little high level logic for the game. The Vulkan backend is 275 kb, so it's not really that much larger.

 

 

 

 

14 minutes ago, ketmar said:

a very small set of developers. and you keep telling me that there are no other devs who need to work with gfx. that is where i cannot agree.

 

I wouldn't say that. Maybe where you work - but where I work these skills are neither needed nor desired.

The company I work for develops software for the furniture industry, and they have no use for old-fashioned 3D; they want to present their stuff in the best way possible, and that includes realistic lighting, shadows and texturing - all features that depend on how modern shader-based APIs work. They use complex models that require vertex and uniform buffers throughout and generally depend on modern features.

 

 

 

14 minutes ago, ketmar said:

 

of course, people who just "need the work done" won't be vocal, and won't submit proposals to the ARB. they have plenty of other things to do. it doesn't mean that those people don't exist, or that those people are few. and they usually don't give a shit about getting the most FPS from the GPU, they just need something simple, yet extensible and powerful enough for them to use. that is where "classic" OpenGL shines.

 

Yes, these people exist, but again: If you work with 10-year-old graphics concepts, your software looks 10 years old, and even in the non-gaming industry you'll have a hard time competing with that.

 

 

14 minutes ago, ketmar said:

 

Khronos and GPU vendors neither know nor care about that segment of consumers. i know, and i care -- after all, i am one of those ignored people.

 

Well, they know neither me, my employer nor my colleagues, but we do not feel ignored - we feel well served. In our eyes, 3D programming is going exactly where it should go - allowing us to use the hardware to the fullest extent possible, not being constantly obstructed by insufficient high-level APIs that abstract away all the things that would help us make things better.

 

 

2 minutes ago, dpJudas said:

 

Personally I blame a lot of the initial setback on the vulkan-tutorial.com tutorial. It is actually amazing how much harder that tutorial makes it to learn Vulkan. It spends all its time showing you how not to arrange code (it picks a spaghetti solution almost every time) while not telling you much about the important parts of the Vulkan API. That I ended up finding the spec easier to learn from than the tutorial is a clear sign it failed at what it set out to do. :)

 

Oh yes, that tutorial was pure horror in retrospect. It should be preserved as a shining example of "How not to write software". None of the important things were done in a way that allowed easy code reuse. I was in the process of taking it apart and cleaning it up, but preferred to abstract the hardware renderer first to reduce the low-level work - and unfortunately, after that, I was constantly too short on time thanks to my job to resume the work. And I believe that most of my original work (the few pieces there were) has been reused because it did not use that spaghetti pattern.

 

Anyone taking that as a roadmap to set up Vulkan will be in for a very rough and unpleasant ride. I was quite surprised initially when I looked at the existing backend to find out how little of the coding madness that tutorial exhibited was present.

 

Share this post


Link to post
11 minutes ago, dpJudas said:

That I ended up finding the spec easier to learn from than the tutorial is a clear sign it failed at what it set out to do. :)

this is one of the biggest problems with Vulkan, actually. while the underlying concepts may be easy, nobody has actually tried to clearly explain them. especially for people who aren't in that 3d gfx business, but just want to render their toruses. ;-)

Share this post


Link to post
This topic is now closed to further replies.