RetroAkaMe

What does Vulkan do differently than OpenGL?


From the perspective of an end-user, nothing. It should behave identically, just with better performance on AMD and Intel GPUs.

 

Internally... from what I've heard, it's "closer to the metal", so to speak, to the way modern GPUs work. It doesn't have the legacy cruft that OpenGL accumulated from the few major redesigns it had on its way to version 4.x. However, it's also more finicky and more prone to crash on or complain about things that work just fine in OpenGL. It also requires more VRAM, though that shouldn't really be a problem in Doom.


Vulkan addresses more of the GPU's internal performance by exposing lower-level calls than OpenGL, which is more generic in nature. You can go even lower and code against the GPU directly, but then we're talking assembly, and that kind of thing is usually reserved for the consoles (which use their own custom APIs that effectively expose more of the hardware).


Vulkan is a different graphics API that allows developers to go "lower-level" than OpenGL. In English, that means that it takes more work for a programmer to put graphics on the screen, but they can do so with greater efficiency and more control over the final result, allowing for higher framerates and better visual effects than OpenGL. In addition, some operating systems like macOS have deprecated OpenGL.

 

As far as GZDoom goes, if you're not using an NVIDIA GPU, you should probably use Vulkan if you have the ability to do so, as it'll work better and faster. On an NVIDIA GPU, there are some stability issues in GZDoom that aren't quite addressed yet (either by the engine or the GPU drivers... or both?), so you might be better off using OpenGL where possible.


I couldn't have said it better.

 

If you ask me, Vulkan is the epitome of taking the human factor out of the equation and creating something that can only be described as an utter mess.

Yes, it's closer to the hardware and yes, it does a few things right that OpenGL did wrong - but it also does a few other things terribly wrong all on its own.

 

Sadly, aside from NVidia nobody ever managed to get an efficient OpenGL driver working and thus we are saddled with the need to support such a horribly designed API.

 


No, the mess is because it is too close to the hardware and has to consider all existing hardware's quirks to work properly.

And as it turned out, some graphics hardware is just weird.

 

1 hour ago, Graf Zahl said:

No, the mess is because it is too close to the hardware and has to consider all existing hardware's quirks to work properly.

And as it turned out, some graphics hardware is just weird.

 

Heh, okay

19 hours ago, Graf Zahl said:

No, the mess is because it is too close to the hardware and has to consider all existing hardware's quirks to work properly.

And as it turned out, some graphics hardware is just weird.

Hardware makers: "Let's create a standardized API for displaying graphics."

 

Also hardware makers: "Let's make our adherence to that API kinda crap."


No. That conclusion is wrong.

 

The reality looks more like this:

 

Hardware makers: "Let's make hardware that very poorly maps to current APIs."

 

Also hardware makers: "Oops. This didn't work. Let's make an API that works better with our design."

 

Here you can replace "hardware maker" with "AMD" across the board.

NVidia has publicly stated that their hardware has no need for most of Vulkan's weird shenanigans (like image layout transitions, which are a major PITA for developers).
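For those who haven't touched Vulkan: this is roughly what a single image layout transition looks like. A minimal sketch only, assuming an already-recording command buffer and an existing image; the function and parameter names are placeholders.

#include <vulkan/vulkan.h>

// Sketch of one image layout transition: prepare an image so a transfer
// (e.g. a texture upload) may write to it. 'cmd' must be a command buffer
// in the recording state and 'image' an existing VkImage.
void transitionToTransferDst(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier{};
    barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask       = 0;
    barrier.dstAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.oldLayout           = VK_IMAGE_LAYOUT_UNDEFINED;
    barrier.newLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = image;
    barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

    // The caller must also name the pipeline stages on both sides of the barrier.
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         0,
                         0, nullptr,   // no global memory barriers
                         0, nullptr,   // no buffer barriers
                         1, &barrier); // one image barrier
}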

 

On 10/9/2021 at 9:52 AM, RetroAkaMe said:

Why is it so? Bad organisation?

Mostly because Vulkan is kind of an "assembler language for GPUs". It is very low-level and uses GPUs more efficiently, but at the same time it is more tied to GPU architectures and doesn't really try to help the programmer. There is no way to make something that is both programmer-friendly and "close to the metal". OpenGL is way more high-level and easier to work with, but it is slower because of this. Over time, Khronos kept adding more and more lower-level features to OpenGL, and the end result is… not really nice. So the new API was created, one that throws away any attempt to be "high-level". Basically, you can reimplement OpenGL with Vulkan, but not vice versa. But programming in assembler is hard, and so is programming in Vulkan.


That is how the story usually goes. However, I'd sooner say Vulkan is like Itanium.

 

Itanium was a platform built on the idea that if all the instruction-scheduling work was moved to the compiler, then theoretically that should produce faster programs. The idea was bonkers for two simple reasons. First, it assumed that CPU internals wouldn't change - which generation of Itanium should your compiler target? Second, it assumed perfect knowledge on the compiler's part - only if the compiler team understands the CPU internals perfectly can Itanium beat x64.

 

Ultimately Vulkan rests on the same misguided idea of going low-level. By pushing all that work onto game developers you'll get code that's inefficient for the actual GPU, and now, because the API is lower-level than OpenGL, there is less room for the driver to optimize things. What they really should have done is keep it higher-level, but give developers a proper way to hand the display driver the guarantees it needs to make the optimizations that can't be done with OpenGL/D3D11.


Now that's a comparison I hadn't thought of. But it's so true - Vulkan is so explicit about some truly inane things (plus that insanity called 'image transitions') that it is only a matter of time until somebody develops a new GPU that simply does not work well with how explicitly some things have to be done.

 

Thinking about it, the only good thing in Vulkan is the command buffers. I could also accept the need to create pipeline objects, but where things stop being fun is how texture creation works, and even more how textures and buffers are supposed to be *used*. What's the point of these descriptor sets? The entire setup here makes it virtually impossible to write efficient code! If I wanted to create a combined resource of some textures plus attached buffers, I'd want to do it very differently.
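For context, this is roughly what 'using' a single texture involves. A compressed sketch that assumes the device, descriptor pool, set layout, image view, sampler, pipeline layout and command buffer have all been created beforehand; in OpenGL the rough equivalent is glActiveTexture plus glBindTexture and one uniform.

#include <vulkan/vulkan.h>

// Sketch: point binding 0 of a shader at one texture, the Vulkan way. Every
// handle passed in is assumed to have been created earlier.
VkDescriptorSet bindOneTexture(VkDevice device, VkDescriptorPool pool,
                               VkDescriptorSetLayout layout, VkImageView view,
                               VkSampler sampler, VkPipelineLayout pipelineLayout,
                               VkCommandBuffer cmd)
{
    // 1. Allocate a descriptor set out of the pool, using the layout.
    VkDescriptorSetAllocateInfo allocInfo{};
    allocInfo.sType              = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    allocInfo.descriptorPool     = pool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts        = &layout;
    VkDescriptorSet set;
    vkAllocateDescriptorSets(device, &allocInfo, &set);

    // 2. Write the image view + sampler into binding 0 of that set.
    VkDescriptorImageInfo imageInfo{};
    imageInfo.sampler     = sampler;
    imageInfo.imageView   = view;
    imageInfo.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

    VkWriteDescriptorSet write{};
    write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet          = set;
    write.dstBinding      = 0;
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    write.pImageInfo      = &imageInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);

    // 3. Only now can the set be bound for drawing.
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                            pipelineLayout, 0, 1, &set, 0, nullptr);
    return set;
}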

 

It's indeed a repeat of Itanium in some ways, where 'smart' people thought that software is good at solving problems better delegated to hardware. I mean, what's so bad about the old resource binding model? Apparently that's what NVidia had always implemented in hardware - it's just that AMD must have made some major, major screwup with GCN to require such a monstrosity of an API...

 

 


Maybe Vulkan code is meant to be written as the lower-level layer of a generic graphics library? And graphical (or GPU computing) app programmers should just use that library instead of Vulkan directly…


I think the comparisons to Itanium are a little bit incorrect for one simple reason: People are actually using Vulkan.

 

Itanium basically died in the cot. Intel and HP pushed it, and... that was it. HP got it started, Intel got snared in, they pushed it to the moon, and then AMD did the natural thing and extended x86, then Microsoft cajoled Intel into putting x64 into their Xeons instead of IA-64. Itanium had a sudden heart attack and never recovered. About a dozen manufacturers made Itanium hardware, but five of them were out by the time 2010 rolled around, three more dropped by 2012, another three sometime before 2015, and HP alone carried the torch from then until the product's final demise in July of this year(!).

 

Vulkan, on the other hand? It's here, it's now, it's everywhere. Got a cross-platform game? You're almost certainly doing a Vulkan renderer if you want that running on Linux. I'd also suspect that the consoles' proprietary APIs are closer to something like Vulkan than to D3D (especially as they have the benefit of a fixed hardware spec... or two, given the recent penchant for mid-generation hardware refreshes).

 

Vulkan may do a lot of dumb things, but it's definitely more deployed than Itanium ever got to be. And so that means it's probably here to stay and the developers are just going to have to work with it - or stick only to Windows or consoles. But given how hard Valve is pushing Proton and the Steam Deck (which probably won't fizzle to nearly the extent some think it will), it's definitely in their best interests to have a native port rather than rely on Proton to do the translation. Otherwise we just end up in a D3D world again.

50 minutes ago, Graf Zahl said:

It is very clear that Vulkan's designers completely ignored the needs of non-expert programmers.

I do remember you thought it was the second coming of Christ for your port. What made you change your mind?


Working with it?

 

I have to accept that the so-called 'experts' consider it the future, but working with Vulkan just produces such totally fucked up code that it's very hard to work with or to refactor if the need arises.

 

Ever wondered why the Vulkan backend still has some stability issues?

 

15 minutes ago, Dark Pulse said:

I think the comparisons to Itanium are a little bit incorrect for one simple reason: People are actually using Vulkan.

 

That doesn't tell us much. They work with Vulkan because there is nothing better, as AMD made sure that D3D12 got the same insanity implanted.

 

 

15 minutes ago, Dark Pulse said:

Itanium basically died in the cot. Intel and HP pushed it, and... that was it. HP got it started, Intel got snared in, they pushed it to the moon, and then AMD did the natural thing and extended x86, then Microsoft cajoled Intel into putting x64 into their Xeons instead of IA-64. Itanium had a sudden heart attack and never recovered. About a dozen manufacturers made Itanium hardware, but five of them were out by the time 2010 rolled around, three more dropped by 2012, another three sometime before 2015, and HP alone carried the torch from then until the product's final demise in July of this year(!).

 

It's not that simple. Some people genuinely believed that the idea behind it was sound. The error they made was not stopping when it became evident that it wasn't going to work.

 

But we are not talking about Itanium's failure, we're talking about the basic concept behind it, and that has some striking similarities with Vulkan: simplify the driver/hardware by pushing all the implementation details into user space. That is never *ever* a good idea for hardware/driver design, because it totally blocks all paths to future optimizations. Imagine some new 3D hardware coming up with a radically simplified resource interface where you can just plug the components together like a construction kit and it would 'just' work without any overhead whatsoever. That hardware will never be able to harness its full power with an API as rigid as Vulkan - so one of two outcomes is inevitable:

1) New hardware gets designed to fit into Vulkan's mode of operations and will eventually run into some wall where optimization is blocked by the limitations of the API.

2) A new, more fitting API needs to be designed.

 

 

 

15 minutes ago, Dark Pulse said:

 

Vulkan, on the other hand? It's here, it's now, it's everywhere. Got a cross-platform game? You're almost certainly doing a Vulkan renderer if you want that running on Linux. I'd also suspect that the consoles' proprietary APIs are closer to something like Vulkan than to D3D (especially as they have the benefit of a fixed hardware spec... or two, given the recent penchant for mid-generation hardware refreshes).

 

OpenGL thrived for more than 25 years. It was there, it got used, and it was still a shitty API from its very start, because its state machine model was just a brain-dead stupid concept for a system library.
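For anyone who hasn't run into it, that state machine problem looks something like this in practice. A contrived sketch where texA and loadHelper are made up purely for illustration.

#include <GL/gl.h>

void loadHelper();   // hypothetical helper, used only for illustration

// Sketch of the classic OpenGL state-machine pitfall: calls act on whatever
// object happens to be bound globally, not on an object you name explicitly.
void configureTexture(GLuint texA)
{
    glBindTexture(GL_TEXTURE_2D, texA);

    loadHelper();    // if this binds another texture internally and forgets
                     // to restore the previous binding...

    // ...then this no longer configures texA, but whatever is bound right now.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}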

On the same note, C++ streams have been in use for more than 30 years. They are still a shitty API built on a very broken design, and they won't go away either, despite being irreparably broken.

 

So, being used does not negate being broken.

 

15 minutes ago, Dark Pulse said:

Vulkan may do a lot of dumb things, but it's definitely more deployed than Itanium ever got to be. And so that means it's probably here to stay and the developers are just going to have to work with it - or stick only to Windows or consoles. But given how hard Valve is pushing Proton and the Steam Deck (which probably won't fizzle to nearly the extent some think it will), it's definitely in their best interests to have a native port rather than rely on Proton to do the translation. Otherwise we just end up in a D3D world again.

 

We'll end up in a D3D world anyway, because I fully expect Microsoft to release a new D3D API if the need arises, while Vulkan will get stuck in 'design by committee' land as soon as breaking changes need to be discussed. Remember OpenGL 3.0? It was supposed to be the next-gen graphics API, but the committee made sure it landed with a whimper.

 


OpenGL 3.3 with its Core context took so much code to set up just to draw a single triangle that I decided never to return to it.

 

Frankly, at the rate graphics APIs are evolving, I feel that the average programmer won't be able to pick up graphics programming without resorting to API abstractions that hide all of those verbose details of the underlying APIs.


The only real issues with the core context are that you a) need a vertex buffer and b) need a shader. Which is what virtually every program that's doing more than a single triangle would have anyway.
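For reference, a minimal sketch of that "extra" work in GL 3.3 core. It assumes a core context and a function loader (e.g. glad) are already initialized, and omits error checking.

#include <glad/glad.h>   // or whichever GL loader is in use (assumption)

// One vertex buffer (plus a VAO) and one trivial shader pair: the extra work
// GL 3.3 core asks for compared to the old fixed pipeline.
static GLuint compileShader(GLenum type, const char* src)
{
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, nullptr);
    glCompileShader(sh);                  // error checking omitted for brevity
    return sh;
}

void drawOneTriangle()
{
    // Vertex data lives in a buffer object instead of glBegin/glVertex calls.
    const float tri[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

    // There is no fixed pipeline, so even a flat white triangle needs shaders.
    const char* vs =
        "#version 330 core\n"
        "layout(location = 0) in vec2 pos;\n"
        "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
    const char* fs =
        "#version 330 core\n"
        "out vec4 color;\n"
        "void main() { color = vec4(1.0); }\n";
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compileShader(GL_VERTEX_SHADER, vs));
    glAttachShader(prog, compileShader(GL_FRAGMENT_SHADER, fs));
    glLinkProgram(prog);

    glUseProgram(prog);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}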

 

8 minutes ago, Cacodemon345 said:

 

Frankly, at the rate graphics APIs are evolving, I feel that the average programmer won't be able to pick up graphics programming without resorting to API abstractions that hide all of those verbose details of the underlying APIs.

 

I think that part is inevitable, because the alternative would be APIs as dumbed down as the original OpenGL, which would be incapable of getting any performance out of the hardware.

The main problem with OpenGL's initialization is the system-side code anyway, if you want to go the whole nine yards by yourself instead of using a windowing abstraction library like GLUT or GLFW.
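For comparison, this is roughly what that system-side setup shrinks to once GLFW handles the platform bits. A sketch only; window size and title are arbitrary.

#include <GLFW/glfw3.h>

// Sketch: the platform-specific window/context dance, reduced to a handful of
// calls when a windowing library like GLFW handles the system side.
int main()
{
    if (!glfwInit())
        return 1;

    // Ask for a 3.3 core profile context.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(800, 600, "GL demo", nullptr, nullptr);
    if (!window)
        return 1;
    glfwMakeContextCurrent(window);
    // ...load GL function pointers here, then set up buffers and shaders...

    while (!glfwWindowShouldClose(window)) {
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}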

 

The main problem with Vulkan isn't the verbosity anyway; it's that it exposes several rather rigid traits of certain hardware at the API level, in a way that makes it extremely cumbersome to set this whole shit up.

The verbosity is something normally hidden behind some quickly written abstraction so that the next time around you can reuse what you got already.

 

 

48 minutes ago, Cacodemon345 said:

OpenGL 3.3 with its Core context took so much code to set up just to draw a single triangle that I decided never to return to it.

 

Frankly, at the rate graphics APIs are evolving, I feel that the average programmer won't be able to pick up graphics programming without resorting to API abstractions that hide all of those verbose details of the underlying APIs.

 

OpenGL 3 is really not that hard to learn. GL3 Core was the first graphics API I ever learned, and after a day or two, once I finally figured out what I was doing, it was great. There are a bunch of not-great tutorials out there, but I used this one and can vouch for it. I've since had to learn WebGL as well as some old version of OpenGL ES, and it annoys me how much stuff is missing, as well as the limitations of the fixed pipeline.

 

That being said, I think the idea is that in 2020, the need to write raw graphics code is much lower than it used to be. I imagine most folks either trust the engine they're using, like Unreal/Unity/Godot, or they use a simplified API wrapper like BGFX that gives library users a consistent API that maps to multiple back-end APIs.

15 minutes ago, AlexMax said:

That being said, I think the idea is that in 2020, the need to write raw graphics code is much lower than it used to be. I imagine most folks either trust the engine they're using, like Unreal/Unity/Godot, or they use a simplified API wrapper like BGFX that gives library users a consistent API that maps to multiple back-end APIs.

 

That may be true, but for some things an existing engine may not be the right fit, and so far most of these wrapper libraries seem to make the same mistake of trying to support everything and the kitchen sink with a lowest-common-denominator API, instead of just ditching all the obsolete crap and focusing on what's really important.

For example, BGFX still supports D3D9 and OpenGL 2, which for me is an immediate red flag - with such wide support it is virtually guaranteed to run into issues with D3D12 or Vulkan.

 

23 hours ago, Graf Zahl said:

For example, BGFX still supports D3D9 and OpenGL 2, which for me is an immediate red flag - with such wide support it is virtually guaranteed to run into issues with D3D12 or Vulkan.

 

When you say "obsolete crap" are you talking about the fixed-pipeline stuff?  Or are you saying that the concept of the GL3 workflow of vertex buffers/index buffers/shaders is also obsolete?  Obviously with a shim library you're not going to squeeze every ounce of performance out of the underlying API, but I figure as long as you're not glBegin-ing your triangles it should be fast enough for most folks, and bgfx uses a programmable-pipeline approach from the start.


I mainly mean depending on APIs so old that the only hardware that would benefit from their support is so weak that supporting it is more or less pointless.

 

Take OpenGL 2, for example. You only need that for a shrinking number of old and *extremely* weak laptop GPUs that wouldn't be able to run any semi-demanding software acceptably anyway, while the API is missing crucial features that are important to comfortably work with more current APIs. So there will have to be compromises here that may not be worth the trade-off.

 

The plain and simple fact is that if you need to support all these APIs and want to do it with a single set of code, you either cripple the old API or the new one - there's no way around it.

 

58 minutes ago, AlexMax said:

Obviously with a shim library you're not going to squeeze every ounce of performance out of the underlying API,

 

You need some sort of shim anyway unless you decide to target one single external API and nothing else.

Even GZDoom, which does not use an external library, puts an intermediate abstraction layer in there. So the ultimate trade-off won't be that bad.

 

 


BGFX also has the problem of requiring platform-specific code to actually initialize the library, which is an instant no-go for me. Unfortunately, I didn't find anything else that abstracts away the 3D graphics APIs while maintaining universal portability the way SDL2 does.

2 hours ago, Cacodemon345 said:

BGFX also has the problem of requiring platform-specific code to actually initialize the library, which is an instant no-go for me. Unfortunately, I didn't find anything else that abstracts away the 3D graphics APIs while maintaining universal portability the way SDL2 does.

 

SDL2 and BGFX are not in the same niche.  In fact, they can be used together.

 

https://github.com/pr0g/sdl-bgfx-imgui-starter

 

Besides, SDL was never really an all-in-one solution unless you stuck to software rendering or the basic abstracted hardware-driven primitives it gave you. Even for basic multiplatform OpenGL you had to bring in a GL loader at minimum.


I am talking about stuff like this when I refer to platform-specific code to initialize a library:


#if BX_PLATFORM_WINDOWS
pd.nwh = wmi.info.win.window;
#elif BX_PLATFORM_OSX
pd.nwh = wmi.info.cocoa.window;
#elif BX_PLATFORM_LINUX
pd.ndt = wmi.info.x11.display;
pd.nwh = (void*)(uintptr_t)wmi.info.x11.window;
#elif BX_PLATFORM_EMSCRIPTEN
pd.nwh = (void*)"#canvas";
#endif // BX_PLATFORM_WINDOWS ? BX_PLATFORM_OSX ? BX_PLATFORM_LINUX ? BX_PLATFORM_EMSCRIPTEN

Then I suppose I don't really understand why you object to something that seems so simple, compartmentalized (limited to setting platform struct values), and already done for you.  If it bothers you, I'd just shove it in a function - or multiple functions where only the correct definition for the given platform is exposed, which is something you'd need anyway for other functionality that neither library covers properly.   If you need to add a new platform, you just add a define, figure out what window handle the library expects, and then just set it correctly.
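Something along these lines, presumably. A sketch only: the function name is made up, error handling is minimal, and it just wraps the snippet from the post above behind one helper.

#include <SDL.h>
#include <SDL_syswm.h>
#include <bgfx/bgfx.h>
#include <bgfx/platform.h>
#include <bx/platform.h>

// Sketch: the platform-specific handle lookup, shoved into one helper so the
// rest of the code base never sees the #ifdefs.
static bool fillPlatformData(SDL_Window* window, bgfx::PlatformData& pd)
{
    SDL_SysWMinfo wmi;
    SDL_VERSION(&wmi.version);
    if (SDL_GetWindowWMInfo(window, &wmi) != SDL_TRUE)
        return false;

#if BX_PLATFORM_WINDOWS
    pd.nwh = wmi.info.win.window;
#elif BX_PLATFORM_OSX
    pd.nwh = wmi.info.cocoa.window;
#elif BX_PLATFORM_LINUX
    pd.ndt = wmi.info.x11.display;
    pd.nwh = (void*)(uintptr_t)wmi.info.x11.window;
#elif BX_PLATFORM_EMSCRIPTEN
    pd.nwh = (void*)"#canvas";
#endif
    return true;
}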

