
flohofwoe

This is a fairly well known project which fixes one of Vulkan's greatest shortcomings (some might say the lack of resource memory management is one of Vulkan's greatest features though), but I wonder if there are alternatives which provide most of the critical features but with a much smaller footprint. VMA is around 20kloc, which is about the same as jemalloc (23kloc). A general purpose allocator like jemalloc is overkill for many situations, but there are much smaller (yet slower) alternatives like Emscripten's emmalloc (which is just 1.4 kloc: https://github.com/emscripten-core/emscripten/blob/main/syst...).

Are there similar smaller alternatives for VMA?

As for the motivation: my 3D API wrapper around OpenGL, D3D11, Metal and WebGPU clocks in at 15kloc for all backends combined. I'm hesitant to add a Vulkan backend exactly because of problems like having to do my own memory management for Vulkan resources. If I integrated VMA, it would more than double the line count just for the memory management of a single 3D backend, which simply doesn't seem "right". See: https://github.com/floooh/sokol/blob/master/sokol_gfx.h

q3k

V-EZ [1] was kinda supposed to be that, a wrapper that makes Vulkan easier to use in non-über-performance-critical applications, but it seems to be dead [2]... Well, maybe not smaller, but at least standardized to the point where you would expect it to be present as a system dependency.

[1] - https://github.com/GPUOpen-LibrariesAndSDKs/V-EZ

[2] - https://github.com/GPUOpen-LibrariesAndSDKs/V-EZ/issues/73

Narishma

If performance is not critical, why not just use something simpler like OpenGL or D3D11?

flohofwoe

One problem is that tooling support around OpenGL and D3D11 is quickly rotting away, because the GPU vendors and Microsoft focus on Vulkan and D3D12.

fouric

D3D/DirectX is Windows-only, while Vulkan is supposed to replace OpenGL, not complement it. Maintaining three different graphics APIs (DirectX, OpenGL, Vulkan) is always harder than maintaining only two (DirectX plus, eventually, Vulkan).

pjmlp

I consider Vulkan to have repeated the same design-by-committee mistakes as OpenGL, along with the same disregard for graphics programming newbies, unlike the experience that the proprietary APIs (including the console ones) offer.

Ironically WebGPU is what Vulkan should have been in first place, from my point of view.

skohan

Vulkan was never intended to be for "graphics programming newbies". It's intended to be the thing you would build a more developer-friendly API on top of.

Jweb_Guru

Look at the state of Vulkan drivers in practice: the "fastest" ones in benchmarks of actual AAA games do tons of work that was supposed to be done in userspace (that was the whole point!). I agree in principle that it's important to have an API that exposes all the ugly low-level details, but doing so and then telling people not to actually use it is pretty much guaranteed to result in a suboptimal compromise like what we have today. Of course people are going to try to code against vanilla Vulkan. I think things would be different if it had shipped side by side with something like a robust implementation of WebGPU, so that people who weren't able to use big-name game engines had something to fall back on, but that's not what happened.

fctorial

You don't need to be a graphics programming veteran to learn Vulkan either. The official spec is quite approachable even for newbies.

pjmlp

When Khronos states that OpenGL is done, newbies are going to try to learn an API that is actually getting updates, regardless of its intended target.

Jweb_Guru

A rare moment where we completely agree... Vulkan drivers end up implementing all sorts of workarounds that undermine the "low level" nature of the API because people can't use it efficiently. E.g. the Mesa driver spawns a thread per queue to actually perform submission, despite the spec intending the user to control threading.

adwn

> I consider Vulkan to have redone the same design by committee mistakes from OpenGL

Which are? (I'm genuinely interested)

pjmlp

Extension spaghetti for starters, https://vulkan.gpuinfo.org/listextensions.php

Followed by Khronos' insistence that it isn't part of their job to define an SDK, so each newbie has to go through the rite of passage of learning how to get an OS abstraction library to show a 3D accelerated window, a math library, a font handling library, a texture and image loading library, shader compiling infrastructure, a scene graph to handle meshes, ...

Now there is the LunarG SDK, which still only offers a subset of these kinds of features.

If it wasn't for NVidia's early C++ efforts, Vulkan would still be C only.

Also, Vulkan only exists because AMD was kind enough to contribute Mantle as a starting point; otherwise Khronos would most likely still be arguing about what OpenGL vNext was supposed to look like.

Really, in the 21st century, if you want to write portable 3D code, just use a middleware engine with plugin-based backends.

bitwize

I kind of wish Project Fahrenheit had succeeded, and SGI had scrapped OpenGL back in the 90s in favor of Direct3D.

Can we please just make Direct3D the standard and get the open source community to support it instead of Khronos khruft?

SimilarGeneral1

Microsoft had plenty of time, before and after they started their "Microsoft loves open source" marketing campaign, to open source it or standardize it in some form. It's core to their vendor lock-in strategy so it will never happen. Makes you wonder why they are buying so many Vulkan related companies and game studios.

skohan

Why don't we all just adopt Vulkan which is:

1. Already open source

2. Well documented

3. Extremely powerful, as demonstrated by the work at id Tech

pjmlp

Even Carmack later revised his view of DirectX, around the version 8 timeframe.

skohan

How complex is your renderer? It's not that hard to do memory management manually with Vulkan for simple rendering pipelines. I think it gets harder when you want to have a lot of dynamism going on.

gmueckl

VMA doesn't solve a lot of the hard problems around memory management, which really come from synchronization issues. You need to make sure that the GPU is done with a resource before you can destroy it on the CPU side. What it brings is a host of allocation algorithms for very specific use cases and a lot of code for tracing/debugging memory usage. Plus, it shields the user from some positively braindead quirks that the Vulkan spec has gathered (some through extensions that create traps when they are merely present but not enabled; I had to scream when I pieced that together...).

I have my own slightly anemic memory manager code that implements a basic scheme with no frills, avoids the pitfalls and fits into about 500 lines of code. The only thing I really might want to improve is some of the free-list handling. The rest should be good for quite a while.

skohan

Yeah, I don't even really have a system for this; I basically have mostly statically managed memory which lives for the length of the scene/program, and for per-frame stuff I just wait for fences on a cleanup thread.

flohofwoe

It's a somewhat general 3D API wrapper which exposes an API that's similar to Metal and WebGPU, but with a number of restrictions because it needs to support GLES2/WebGL backends as the worst case.

One idea I'm playing with is to provide callback hooks so that resource allocation and management can be delegated to the API user (so they can for instance integrate VMA themselves), and only provide a rudimentary default solution (which probably would be enough for many simple use cases).

mkoubaa

Why do you care so much about LOC of third party code?

flohofwoe

Long story short: because of the terrible dependency management situation in the C/C++ world. Also, once you commit to an external dependency it becomes your own maintenance problem, so the less code the better.

Ostrogodsky

Somewhat related I hope. Does anyone know a resource guide to learn methodically about GPUs? Let me see if I can explain my frustrations:

1. The usual recommended books for beginners, although good, miss what I need: yes, I love building ray-tracers and rasterizers, but I can finish the book and not have the slightest idea how a GPU actually works

2. Books like H&P, although excellent, treat GPUs as an afterthought in one extra chapter, and even that content is 5-10 years behind

3. The GPU Gems series is too advanced for me; I get lost pretty quickly and quit in frustration

4. Nvidia, AMD resources are 50% advertising, 50% hype and proprietary jargon.

I suppose what I want does not exist: a guide that, starting from a somewhat basic level (say, assuming the reader took an undergraduate course in computer architecture), methodically explains how the GPU evolved into a completely separate type of computing architecture, how it works in nitty-gritty detail, and how it has been used in different applications (graphics, ML, data processing, etc.)

raphlinus

I agree strongly with you about the need for good resources. Here are a few I've found that are useful.

* "A Trip Through the Graphics Pipeline"[1] is slightly dated (10 years old) but still very relevant.

* If you're interested in compute shaders specifically, I've put together "compute shader 101"[2].

* Alyssa Rosenzweig's posts[3] on reverse engineering GPUs cast a lot of light on how they work at a low level. It helps to have a big-picture understanding first.

I think there is demand for a good book on this topic.

[1]: https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-...

[2]: https://github.com/googlefonts/compute-shader-101

[3]: https://rosenzweig.io/

Ostrogodsky

Thank you, I will check them out. I remember having read an article by that smart young lady; I didn't understand half of it, but hopefully I will get more out of it this time.

flohofwoe

Here's a somewhat recent blog post I stumbled over which might be helpful:

https://rastergrid.com/blog/gpu-tech/2021/07/gpu-architectur...

There isn't a lot of actual under-the-hood information though, because GPUs are closed IP. So the information needs to be pieced together from occasional conference talks, performance optimization advice from GPU vendors, and what enthusiasts reverse engineer by poking GPUs through the 3D APIs.

Ostrogodsky

Thanks. Ok, hear me out, because here comes naive time. GPU demand will continue to grow exponentially this decade (VR, crypto or whatever remains of it, ML, data engineering, Steam Deck, laptops, etc.). Wouldn't it be possible for some multi-country/university/company consortium to create a totally open GPU specification? Has that happened already? I understand we are talking about a long research effort and billions of dollars, but I think the benefits for everyone would be incredible. Open hardware, open libraries, open drivers. Imagine a world with no Linux, a totally closed x86 fully owned by IBM, a closed WebGL. Where can I read more about efforts in this direction, if they exist?

HelloNurse

What would be the point, besides an unprecedented, enormously ambitious hardware design learning project?

Executing shaders better than Nvidia and AMD is not likely.

Selling good graphics adapters at competitive prices to concrete users is even less likely.

Experiments with APIs would have a fatal adoption problem.

Avoiding DRM, if legally feasible, would be less useful than spending the same resources to support Sci-Hub or improve laws.

And of course for the more practical purpose of writing mere software, including Vulkan implementations, specifications are complete and open enough.

randomNumber7

I found this very helpful. It's a 140-page book that explains, at a relatively abstract level, how modern GPU hardware works and its programming model.

General-Purpose Graphics Processor Architectures (Tor M. Aamodt)

https://skos.ii.uni.wroc.pl/pluginfile.php/28568/mod_resourc...

> 4. Nvidia, AMD resources are 50% advertising, 50% hype and proprietary jargon.

I found the Nvidia CUDA C Programming Guide very helpful...

raphlinus

Thanks, I was previously unaware of this reference. On a quick skim, it seems to be more an outline of interesting research directions for GPU architecture than a synthesis of where things stand, targeted at programmers. But it has lots of detail and is likely to be useful to many people!

Ostrogodsky

The book looks pretty great! Pretty much in the spirit of what I wanted. It is very slim, but it has a big bibliography, so it is perfect as an initial roadmap.


flqn

Is this page completely broken for anyone else? After the fade-in animations, the whole page vanishes. Using Chrome, btw.

jbverschoor

I hope this is not how the allocator works.. Allocate, memory gone

mnw21cam

+1, though I don't see a fade, just a white stripe at the top and a long empty grey page below it.

ricardo81

It worked for me earlier, on FF 78, Linux.

Loading it just now, I'm seeing what you see.

dexterhaslem

refreshed and it's gone, darn


Vulkan Memory Allocator - Hacker News