Hacker News
Daily Digest email

Get the top HN stories in your inbox every day.

fxtentacle

"Inspired by the gaming world, we realized that the only way to achieve the performance we needed was to build our own UI framework"

I'm surprised you did not look at "Dear ImGui", "Noesis", and "JUCE". All three of them are heavily used in gaming, are rather clean C++, use full GPU acceleration, and have visual editors available. Especially JUCE is used for A LOT of hard-realtime professional audio applications.

"When we started building Zed, arbitrary 2D graphics rendering on the GPU was still very much a research project."

What are you talking about? JUCE has had GPU-accelerated spline shapes and SVG animations since 2012?

BTW, I like the explanations for how they use SDFs for rendering basic primitives. But that technique looks an awful lot like the 2018 GPU renderer from KiCad ;) And lastly, that glyph atlas for font rendering is only 1 channel? KiCad uses a technique using RGB for gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners. Overall, this reads to me like they did not do much research before starting, which is totally OK, but then they shouldn't say stuff like "did not exist" or "was still a research project".
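For anyone curious what "using SDFs for rendering basic primitives" means in practice: you evaluate a signed-distance function per pixel and turn the distance into coverage. A minimal sketch of the textbook rounded-rectangle distance function (not Zed's actual code; the same formula is usually evaluated in a fragment shader, here shown on the CPU):

```rust
// Signed distance from point (px, py) to a rounded rectangle centered at
// the origin with half-extents (hw, hh) and corner radius r.
// Negative inside, positive outside; |value| is the distance to the edge,
// which is what makes cheap anti-aliasing possible (alpha is a smoothstep
// over the distance).
fn sdf_rounded_rect(px: f32, py: f32, hw: f32, hh: f32, r: f32) -> f32 {
    // Shrink the box by the radius, then measure distance to the shrunk box.
    let qx = px.abs() - (hw - r);
    let qy = py.abs() - (hh - r);
    let outside = (qx.max(0.0).powi(2) + qy.max(0.0).powi(2)).sqrt();
    let inside = qx.max(qy).min(0.0);
    outside + inside - r
}

fn main() {
    // Center of a 20x10 box with 2px corners: deeply inside (negative).
    assert!(sdf_rounded_rect(0.0, 0.0, 10.0, 5.0, 2.0) < 0.0);
    // A point far to the right of the box is outside (positive).
    assert!(sdf_rounded_rect(20.0, 0.0, 10.0, 5.0, 2.0) > 0.0);
}
```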

Jasper_

When we talk about 2D graphics as a research problem, we're talking about native rendering of splines and strokes. JUCE does not have GPU-accelerated splines, it flattens the path to lines and rasterizes the coverage area into a texture that then gets uploaded to the GPU:

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

It also does stroke handling on the CPU:

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

Basically, this isn't really "GPU accelerated splines". It's a CPU coverage rasterizer with compositing handled by the GPU.
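"Flattening" here means subdividing each curve into short line segments before the CPU computes pixel coverage. A toy illustration of the idea (uniform subdivision; real renderers like the linked JUCE code pick segment counts adaptively from an error tolerance):

```rust
// Flatten a quadratic Bézier (p0, p1, p2) into `steps` line segments by
// uniform parameter subdivision. CPU path renderers do this (with adaptive
// error control) before rasterizing the coverage of the resulting polyline.
fn flatten_quad(
    p0: (f32, f32),
    p1: (f32, f32),
    p2: (f32, f32),
    steps: usize,
) -> Vec<(f32, f32)> {
    (0..=steps)
        .map(|i| {
            let t = i as f32 / steps as f32;
            let u = 1.0 - t;
            // B(t) = (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
            (
                u * u * p0.0 + 2.0 * u * t * p1.0 + t * t * p2.0,
                u * u * p0.1 + 2.0 * u * t * p1.1 + t * t * p2.1,
            )
        })
        .collect()
}

fn main() {
    let pts = flatten_quad((0.0, 0.0), (1.0, 2.0), (2.0, 0.0), 4);
    assert_eq!(pts.len(), 5);
    // Endpoints are reproduced exactly; the t = 0.5 point lies at (1, 1).
    assert_eq!(pts[0], (0.0, 0.0));
    assert_eq!(pts[4], (2.0, 0.0));
    assert!((pts[2].1 - 1.0).abs() < 1e-6);
}
```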

fxtentacle

You linked to the software fallback renderer which can be used for cross-platform compatibility. But JUCE also has platform-specific rendering modules.

CoreGraphicsContext::createPath will convert the CPU spline segments to CG spline segments which are then rasterized by CoreGraphics using Metal on the GPU.

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

And on Windows 7 and up it'll use the hardware-accelerated Direct2D APIs:

https://github.com/juce-framework/JUCE/blob/2b16c1b94c90d0db...

Jasper_

You mentioned using the OpenGL context, and this code is used by the OpenGL context.

CoreGraphics does not use the GPU.

Direct2D uses an approach that tessellates paths into triangles on the CPU. It is similar to the JUCE code in that the GPU is used for coverage, but it still does not natively render splines on the GPU.

gabereiser

The absolutism of some of their statements when we have 30 years of GPU research at our disposal is pretty eye-opening. I get that maybe this stuff didn't exist as a crate for rust but c'mon! Splines and shapes, glyph rendering, I wrote a game engine in C# back in 2007 that did all these things, and more. I like the explanation and the breakdown of SDFs for primitives but they are standing on the shoulders of giants and act like they're on an island.

swatcoder

I agree with your final statement and the seeming lack of research, but are you sure your characterization of JUCE is accurate?

You’d surprise a lot of people if you were right about it using GPU acceleration for its UI framework. It does have an OpenGL container that fits into its view object hierarchy if you want to write your own accelerated component, but the rest of the UI is pretty much standard event-loop -> platform-API dispatch. It’s accelerated where the underlying platform-API is accelerated but not in any kind of explicit or fine-tuned way. The focus has always been more on portability than performance, even on the audio side.

(And of course the high-priority audio processing runs in an entirely segregated context from the UI, so the performance characteristics of the two pieces are decoupled anyway.)

coldtea

>I agree with your final statement and the seeming lack of research, but are you sure your characterization of JUCE is accurate?

That JUCE is "heavily used in gaming", for starters, is wildly inaccurate.

fxtentacle

I personally know multiple people using it in their games and game-related tooling and Roli (the company selling JUCE) even has an official Unreal Engine 4 plugin. So to me, it appears to be widely used.

fxtentacle

Just checked and even without any additional setup JUCE is using CoreGraphics which is using Metal under the hood. So yes, the platform-specific renderer is using GPU.

Also, you can use OpenGL for GUI compositing, too, not only as a 3D context.

meindnoch

CoreGraphics doesn't use Metal. You're confusing it with CoreAnimation.

Jasper_

CoreGraphics is a CPU rasterizer. It doesn't use the GPU.

rob74

Maybe you need to add an "(in Rust)" to all these sentences? Sure there are C++ frameworks, but they probably wanted a pure Rust UI framework?

My 2 cents: it's nice to have smooth rendering in your editor, but I'm currently mostly using a Java-based IDE (IDEA family), and it's responsive enough for my taste. If I were to use the current prototype of their editor, I'm afraid the usage of fixed-width fonts all over the place (which I assume can be fixed, but may also be due to constraints of the UI framework?) would probably bother me more than the 120 FPS UI would impress me (assuming that I had a monitor that was fast enough, which I don't). On the plus side, it sounds like they support sub-pixel rendering - if that's the case, kudos to them!

as-cii

Hey rob74! Antonio here, author of the post.

Zed is not constrained to use fixed-width fonts and supports all kinds of fonts, monospaced and proportional alike (ligatures and contextual alternates included). Even though we use a glyph atlas, we rely on CoreText for shaping and rasterization (rendering sub-pixel variants as well). In practice, this means that text is indistinguishable from a piece of text rendered by the operating system.

gostsamo

Does your framework have any accessibility features? I'm a screen reader user and having responsive UI is wonderful as far as it is accessible at all.

DeathArrow

Not true, even if you add Rust. egui was released earlier: https://github.com/emilk/egui

And egui isn't tied to Apple proprietary frameworks.

littlestymaar

Afaik egui isn't doing any kind of the fancy GPU based 2D graphics this blog post is about though.

dorkwood

> KiCad uses a technique using RGB for gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners.

This is known as a “multi-channel signed distance field”, or “msdf”.

https://github.com/Chlumsky/msdfgen
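Concretely, the decode step that preserves corners is a per-pixel median of the three channels. A sketch of the sampling side, assuming an atlas generated by a tool like msdfgen (the `px_range` parameter here is illustrative; real renderers derive it from the atlas metadata and the on-screen scale):

```rust
// Median of three f32s: the core of MSDF decoding. Each channel stores a
// distance field for a different subset of the glyph's edges; taking the
// per-pixel median reconstructs sharp corners where two edge sets meet,
// which a single-channel SDF would round off.
fn median3(r: f32, g: f32, b: f32) -> f32 {
    r.max(g).min(r.min(g).max(b))
}

// Convert a sampled RGB texel (0..1, with 0.5 on the outline) to coverage.
fn msdf_alpha(r: f32, g: f32, b: f32, px_range: f32) -> f32 {
    let sd = median3(r, g, b) - 0.5;
    (sd * px_range + 0.5).clamp(0.0, 1.0)
}

fn main() {
    assert_eq!(median3(0.2, 0.9, 0.5), 0.5);
    // Well inside the glyph: fully opaque.
    assert_eq!(msdf_alpha(1.0, 1.0, 1.0, 4.0), 1.0);
    // Well outside: fully transparent.
    assert_eq!(msdf_alpha(0.0, 0.0, 0.0, 4.0), 0.0);
}
```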

scotty79

Do you know any text editor that uses it for font rendering?

rikarendsmp

I used to use it, but whether or not it works is extremely font- and glyph-sensitive. And if it doesn't work, there is no easy fix (or any way to recognize the failure non-visually).

fxtentacle

You would have to use that if you want smooth antialiased zooming in and out of text.

BTW, the KiCad schematics are pretty text-heavy. You typically write down all the parameters and IDs of all the electrical components.

ohgodplsno

As for text rendering, Slug [0] has existed for much more than ten years, and is pretty much the gold standard for GPU text rendering.

[0] https://sluglibrary.com/

bobajeff

Thanks, I'm reading the paper* linked to on that site explaining the technique used in the library. It's encouraging me more to pursue implementing a 2D render on the GPU. I'm also inspired by a recent talk about gkurve**.

* https://jcgt.org/published/0006/02/02/

** https://m.youtube.com/watch?v=QTybQ-5MlrE

ohgodplsno

Do note that as far as I know, this technique is patent-encumbered, at least in the US.

shultays

ImGui, at least, is only used for debug rendering, not something that makes it to the end user. At least in the small subset of companies I've worked at.

flohofwoe

ImGui is mainly used for debug rendering, but if you browse the screenshot threads, there are quite a few 'end user applications' among them:

https://github.com/ocornut/imgui/issues/5886

andrewmcwatters

No, it's supposed to be used for this purpose.

> Dear ImGui is designed to enable fast iterations and to empower programmers to create content creation tools and visualization / debug tools (as opposed to UI for the average end-user). It favors simplicity and productivity toward this goal and lacks certain features commonly found in more high-level libraries.

It's literally not designed for end user consumption.

marginalia_nu

ImGui is amazing to work with though. Like holy fuck is it pleasant compared to basically every other UI-development paradigm ever in the history of user interfaces.

pjmlp

Accessibility, designers, ... and it isn't even the first; that is how GUIs used to be developed for games on 16-bit home computers.

It is like static linking, it was also there decades ago.

There are reasons why the world has moved on from those approaches.

jspdown

Yes but that's because it's an immediate mode renderer.

It feels intuitive, and that's why it's the de facto standard for building debug tooling. But this intuitiveness comes at a price! No matter how hard you try, it will always be slower and more computationally heavy than retained mode.

Great for some things, a terrible choice for others.
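The trade-off is easy to see in miniature: in immediate mode the whole UI function re-runs every frame, so even idle frames pay the traversal cost. A toy sketch of the pattern (not egui's or Dear ImGui's actual API, just the shape of it):

```rust
// A toy immediate-mode "button": declared inline each frame, returning
// whether it was clicked this frame. State lives in the caller, not in a
// widget object, which is what makes the pattern so pleasant, and also
// why the whole UI must be re-evaluated every frame.
struct Ui {
    mouse: (f32, f32),
    clicked: bool,
    draw_calls: usize,
}

impl Ui {
    fn button(&mut self, _label: &str, rect: (f32, f32, f32, f32)) -> bool {
        self.draw_calls += 1; // paid every frame, interacted with or not
        let (x, y, w, h) = rect;
        let inside = self.mouse.0 >= x
            && self.mouse.0 < x + w
            && self.mouse.1 >= y
            && self.mouse.1 < y + h;
        self.clicked && inside
    }
}

fn main() {
    let mut counter = 0;
    // Two "frames": one idle, one with a click landing on the button.
    for clicked in [false, true] {
        let mut ui = Ui { mouse: (5.0, 5.0), clicked, draw_calls: 0 };
        if ui.button("increment", (0.0, 0.0, 10.0, 10.0)) {
            counter += 1;
        }
        assert_eq!(ui.draw_calls, 1); // the traversal happens even when idle
    }
    assert_eq!(counter, 1);
}
```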

selfmodruntime

You are correct. I've also encountered it occasionally in internal business GUI wrappers.

andersa

I wonder if this is because the default theme for it is somewhat ugly, and most developers aren't designers to make it look better. It is perfectly capable of rendering standalone applications, if you want it to...

EngManagerIsMe

Meanwhile, the gaming world is moving to HTML/CSS/JS for game UI in many cases.

TylerE

I know of at least one shipping commercial game - in 2023 - that renders the UI using Adobe Flash.

EngManagerIsMe

That's super common. It was more common historically, these days it's largely gone away. (Scaleform was the technology that used flash for UI dev)

yason

The bottleneck of UI is not the rendering. A measly 60 fps is plenty fast for UI that feels immediate. We had this in the 90's with software rendering, you don't need a GPU for that today.

What causes user interfaces to hiccup is that it's too easy to do stuff in the main UI thread. At first it doesn't matter, but the stuff accumulates, and eventually the UI begins to freeze briefly, for example after you press a button. The user interface gets intermingled with the program logic, and the execution of the program visibly relays its operations to the user.

It would be very much possible to keep the user interface running in a thread, dedicated to a single CPU on a priority task, updating at vsync rate as soon as there are dirty areas in the window, merely sending UI events to the processing thread, and doing absolutely nothing more. This is closer to how games work: the rendering thread does rendering and there are other, more slow-paced threads running physics, simulation, and game logic at a suitable pace. With games it's obvious because rendering is hard and it needs to be fast so anything else that might slow down rendering must be moved away but UIs shouldn't be any different. An instantly reacting UI feels natural to a human, one that takes its time to act will slow down the brain.

But you don't need a GPU for that.
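The architecture described above maps directly onto threads and channels. A minimal sketch in Rust with std::sync::mpsc (illustrative names, not any real framework's API): the UI side only forwards events and repaints from the latest snapshot, so a slow logic thread can never freeze it.

```rust
use std::sync::mpsc;
use std::thread;

// Events the UI thread forwards to the logic thread.
enum Event {
    KeyPress(char),
    Quit,
}

// The "logic" thread: may be arbitrarily slow; its stalls never block the
// UI thread, which keeps repainting from the last snapshot it received.
fn spawn_logic(
    events: mpsc::Receiver<Event>,
    snapshots: mpsc::Sender<String>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        let mut text = String::new();
        while let Ok(ev) = events.recv() {
            match ev {
                Event::KeyPress(c) => {
                    text.push(c);
                    let _ = snapshots.send(text.clone());
                }
                Event::Quit => break,
            }
        }
    })
}

fn main() {
    let (event_tx, event_rx) = mpsc::channel();
    let (snap_tx, snap_rx) = mpsc::channel();
    let logic = spawn_logic(event_rx, snap_tx);

    // "UI" thread (here: main) only sends events; a real UI would drain
    // snapshots with try_recv() once per vsync tick and repaint.
    event_tx.send(Event::KeyPress('h')).unwrap();
    event_tx.send(Event::KeyPress('i')).unwrap();
    let mut latest = String::new();
    for _ in 0..2 {
        latest = snap_rx.recv().unwrap();
    }
    event_tx.send(Event::Quit).unwrap();
    logic.join().unwrap();
    assert_eq!(latest, "hi");
}
```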

ben-schaaf

Unfortunately this isn't true anymore when you get to very high resolutions like 8k. Just clearing the framebuffer of an 8k display at 60hz requires ~6GB/s; about 1/10th of the theoretical memory bandwidth of modern desktop processors. Add in compositing for multiple windows, text rendering and none of that likely being multi-threaded it's pretty clear a CPU has no chance of keeping up.
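The arithmetic behind that estimate, for anyone checking: the ~6 GB/s figure corresponds to 3 bytes per pixel; with 4-byte RGBA it is closer to 8 GB/s.

```rust
fn main() {
    let (w, h) = (7680u64, 4320u64); // 8K UHD
    let hz = 60u64;
    let gb = 1_000_000_000f64;
    let rgb = w * h * 3 * hz; // 24-bit pixels
    let rgba = w * h * 4 * hz; // 32-bit pixels
    println!("RGB:  {:.1} GB/s", rgb as f64 / gb); // RGB:  6.0 GB/s
    println!("RGBA: {:.1} GB/s", rgba as f64 / gb); // RGBA: 8.0 GB/s
}
```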

fxtentacle

You are correct, but so is the comment you replied to.

The latency that matters for a text editor is how long it takes for a new keypress to show up. And that's usually a 32x32 pixel area or less.

gizmo

That's not sufficient, though. You want the 0.1% case where you press undo a couple of times and big edit operations get reversed to be smooth. You have to hit your frame target when a lot is happening, the individual key press case is easy.

It's just like a video game. A consistent 60fps is much better than an average frame rate of 120fps that drops to 15fps when the shooting starts and things get blown up. You spend all the time optimizing for the worst case where all your caches get invalidated at the same time.

ben-schaaf

The other latency that matters is how long it takes to draw each frame when scrolling through the code, which is usually most if not all of the framebuffer.

CyberDildonics

You aren't talking about the same thing they are and you aren't talking about a scenario that exists. 8k is still exotic and no one is driving any desktop off of a video buffer going from main memory through a CPU.

ben-schaaf

We've literally got an 8k screen in the office and 4k is becoming increasingly common. Not only is this a scenario that exists it's one I've personally experienced and fixed.

jcelerier

But when you do software rendering you don't need to redraw the entire frame buffer, just the parts that changed, which can be very small even at 8k - e.g. the box of a checkbox being switched

ben-schaaf

Scrolling requires redrawing most if not the whole framebuffer.

kllrnohj

You don't need to redraw the entire framebuffer with GPU rendering, either. It's perfectly possible to do small, partial updates.

See for example https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_buff...

And https://registry.khronos.org/EGL/extensions/KHR/EGL_KHR_part...

eptcyka

It's significantly cheaper to render a fullscreen window on the GPU than it is on the CPU if you're running at 2560x1600 or 4K. To push that many pixels, you have to send significant amounts of data (framebuffers) at 60 FPS to the GPU anyway - which is no small feat and is bound to eat into the battery. It's just more efficient to run on the GPU.

seanalltogether

Separating the logic thread from the render thread doesn't end up being a silver bullet for performance. If I press the letter 'a' on my keyboard and my IDE suddenly stalls on a bunch of code-hinting logic, that new state is still not going to make it to the render thread for a couple of frames. As long as the render thread is dependent on state coming in from the logic thread, the stalls still propagate.

You could have some wins around scrolling and any animations that you can commit ahead of time

tracker1

That's a big reason why VS Code pushed the extensions (for code completion logic, etc) to another process. Where VS proper was in-thread... man, some web projects in VS are just painful.

I do think that they will need to do something similar with higher-level language support for extensions. They could probably piggy-back on the Deno work to use V8/JS/TS for extensions, similar to (and maybe even compatible with) VS Code extensions. It's a massive user space, but seeing the core feature set in another editor could be really nice.

For me the killer feature of VS Code is really the integrated terminal. Not having to switch windows, or leave the app to run a quick command is really big imo.

kllrnohj

> A measly 60 fps is plenty fast for UI that feels immediate. We had this in the 90's with software rendering, you don't need a GPU for that today.

No we didn't. We had nothing close to that in the 90s.

Typing was responsive, but that's not anywhere close to 60fps and a very small region to update to boot (120wpm = 600 keys/minute = 10 keys/s, or 10fps). Scrolling benefits from 60fps+, but in the 90s scrolling jumped by lines at a time because it couldn't do anything better.

You need a GPU to keep up with smooth scrolling at modern resolutions. But this also shouldn't be a surprise. What the article talks about is bog standard stuff for all current UI toolkits. It's what mobile platforms like Android have been doing for a decade now.

TylerE

60 is the new 30.

Was amazed at how good my latest iPhone feels with 120hz scrolling. It's like magic.

MaxikCZ

I thought moving to a 144 Hz monitor made sense just for gaming, but now on a 60 Hz monitor even moving the mouse on the desktop feels as if the PC is struggling.

Traubenfuchs

Proper development of multithreaded desktop apps and not blocking the UI thread appears to be a lost art. I remember drawing all the threads on a person-wide piece of paper on the wall when I was working on my first commercial WinForms application. It's not exactly hard, but it requires a basic understanding of UX, threading, and the platforms you are working with.

Nowadays I still regularly see applications with (temporarily) frozen windows and I just don't understand how that's possible. When I was developing my winforms apps, anything that would do more than perfectly predictable UI manipulations was run on a background thread (in a task), would be forbidden from starting twice at the same time and only updated the UI when done.

HideousKojima

In .NET Core/.NET 5+ I think it even defaults to having awaited Tasks run on a different thread. So long as you're using async/await properly it's almost impossible to screw up (no need to worry about SynchronizationContext etc.), yet I still see tons of examples of people who simply don't understand threading, async/await, etc. and screw it all up

Traubenfuchs

As I don't like async/await (read: I probably don't properly understand it) I just added two extension methods to Control that took a lambda for updating UI stuff (one for Invoke, one for BeginInvoke).

Probably kind of like this: https://stackoverflow.com/a/36107907/2306536

I haven't developed C# for almost a decade, so I probably didn't even have access to async/await back then.

zodester

React Native follows this pattern by moving most JS processing off the main thread allowing scrolling and other input to happen without blocking for a response from the JS VM. However this does end up causing a lot of problems with text input and gestures as now you have a sync issue between the threads, if you get caught processing a bunch of stuff in the JS thread the app may appear to be responsive with scrolling but nothing happens in response to button taps or text insertion. It is the only way RN was going to work on lower end hardware though so probably is the right solution if you assume running react everywhere is a good idea.

mike_hearn

Yes, making GUIs responsive isn't as simple as just "don't run stuff on the UI thread". There are good reasons to run stuff there even if you're going to hang up the app for a brief period, namely, the user won't see partial/incorrect updates like non-syntax highlighted text or incorrectly clipped shapes, and - especially important - it means you can't end up with invalid GUI states like the user pressing a button that does X and then immediately pressing another button that does the opposite of X, where you should have disabled the other button but didn't get to it in time. Web apps have this sort of problem if they aren't using enough JS, and it can cause all kinds of weird bugs and errors.

The reality is that moving things off the UI thread is always a calculated engineering decision. It can be worth it, but it complicates the code and should only be done if there's a realistic chance of the user noticing.

DeathArrow

I think the bottleneck comes from updating each UI element individually instead of updating them in batches, and from updating elements that don't need to be updated.

kaba0

That’s just retained-mode GUIs calculating what got damaged and updating only that. That’s how most GUI frameworks have worked for many decades.

maeln

While I do enjoy a nice and smooth gpu-accelerated ui, I never use a gpu-ui framework for my own project for one simple reason: Almost none of them properly support accessibility. Electron (and in general the web), despite its sluggishness has a very good support for accessibility. Most "traditional" native ui toolkit also do.

That would be my advice to anyone making a gpu-accelerated ui library in 2023: Try to support accessibility, and even better: make it a first class citizen.

nicoburns

I can’t speak for this one as it’s proprietary, but you’ll be pleased to hear that pretty much all the open-source Rust GUI toolkits either integrate AccessKit or have concrete plans to do so in the immediate future. There are toolkits that can’t even do basic things like render images but have accessibility support :)

maeln

That is good to hear !

littlestymaar

The Rust GUI ecosystem looks particularly promising in that regard, because there is a foundational accessibility library called AccessKit that's being incorporated into several UI frameworks (egui was the first to have it, and work is underway to add it in several other places).

bruce343434

If you want to do this, where can you start? What are some patterns for making code that's not too spaghetti when you have to handle tabbing, focus, layout, speech of element contents, the actual hierarchy of the elements etc? Are there standardized OS accessibility API hooks or something?

rikroots

I can't speak for GPU solutions, but I do have some experience of trying to make HTML 2d <canvas> elements as accessible as possible. You can get an overview of the issues/solutions (disclaimer: using my canvas library) here - https://scrawl-v8.rikweb.org.uk/learn/eleventh-lesson/

Gigachad

I’ve heard it’s hard to even work this out as all of the screen reader tools are expensive, proprietary, and there are no standards. The typical way is to just make your program, and if it gets popular, the screen reader companies will find a way to make their product work.

illiarian

> work this out as all of the screen reader tools are expensive, proprietary, and there are no standards.

ARIA is a good start, and screen readers built into the OS are a good start.

Moreover, major OSes have accessibility APIs that screen readers will use:

- MacOS https://developer.apple.com/library/archive/documentation/Ac...

- Windows: https://learn.microsoft.com/en-us/windows/apps/develop/acces...

mwcampbell

This used to be the case on Windows, but hasn't been for at least 10 years, and has never been the case on Mac OS X (using the historical name for clarity) or Linux. On Windows, the open-source NVDA screen reader is widely used. Furthermore, the hacks that Windows screen reader developers historically used to "find a way to make their product work", particularly intercepting GDI API calls (either in user space or in a display driver) to build an off-screen model, are not applicable to modern UI stacks. And the other major screen reader hack, using application-specific COM object models, was mostly only applicable to Microsoft Office and Internet Explorer. So you basically have to implement platform accessibility APIs to make your UI accessible. (If you use native controls, that's more or less done for you.)

Edit: BTW, I've been in the assistive technology industry a while, particularly on Windows. Feel free to ask me anything.

lukastyrychtr

That's partially true, but fortunately not completely. There are widely used open-source screen readers for Windows and, of course, there's no proprietary screen reader on Linux. And, definitely, there are standard APIs which are used to communicate the accessibility tree between an app and a screen reader. Yes, they are specific to each platform, and Windows has multiple of them, but they are standardized at least per platform.

pflanze

Here are recent suggestions for Windows and Linux by a blind person: https://news.ycombinator.com/item?id=35008647

wnkrshm

Since browsers can interface, I would guess there are hooks but I would also guess that they are not standardized (between platforms).

msvan

Lots of negativity in here. I for one am excited about the prospect of an editor that is as responsive as I remember Sublime being back in the day, with the feature set I've come to expect from VS Code. An editor like this simply does not exist today, and betting on the Rust ecosystem is entirely the right choice for building something like this in 2023.

jakswa

Hear, hear. I backed Onivim hoping it was going to shine a light in the darkness, and it seemed promising, but was ultimately abandoned? I think; I'm unsure.

xlii

That's exactly the rabbit hole I'm in.

I love immediate feedback, but getting it ranges from hard to nigh impossible. E.g. I have a complex Emacs setup for rendering Pikchr diagrams, but there are a lot of problems to solve between diagram conception and the end result, so I thought, hey, why not make my own cool RT editor - in Rust, obviously.

Unfortunately I learned that GUIs are a tough problem, especially for a hobby project with only one developer. Ultra-responsive GUIs are cool; I have a prototype in egui (not sure if it's as fast as Zed's premise, but it feels fast nonetheless), and yet it doesn't support multiple windows, which I wanted to have.

120 FPS with direct rendering sounds AWESOME just for the sake of it, but I believe that for the end user, layout will matter more than refresh rate, and that's a different beast to tame.

Personally I "almost" settled for Dioxus (shameless plug: [1], there's link to YT video) and I'm quite happy with it. Having editor in WebView feels really quirky though (e.g. no textareas, I'm intercepting key events and rendering in div glyph-by-glyph directly).

[1]: https://github.com/exlee/dioxus-editor

as-cii

Hey xlii! This is Antonio, author of the post.

You're right that rendering is only part of the story. To stay within the ~8ms frame budget, however, every little bit counts. Maintaining application state, layout, painting, and finally pushing pixels to screen, all need to be as performant as they can be.

For layout specifically we're using an approach inspired by Flutter, which lets us avoid complex algorithms but still have a lot of flexibility in the way elements can be positioned and produce a rich graphical experience.

Thanks for reading and commenting!

xlii

I don't have experience with Flutter, but based on a quick glance they're using widgets and, something I found quite important, the ability to develop the GUI outside of the application. That's something I think libraries like egui are missing (and which is easily obtainable with Tauri/Dioxus).

Rebuilding whole app to ensure that some box doesn't get cut off ruins development experience, especially for big apps.

Kudos to you guys, I hope you'll make Zed extensible, so that instead of writing my own editor I can use yours ;-)

Animats

This seems like the wrong portion of the problem on which to spend time. This is a text editor. Performance problems with text editors tend to involve long files and multiple tabs. Refresh speed isn't the problem, although keyboard response speed can be.

I'd like to see "gedit", for Linux, fixed. It can stall on large files, and, in long edit sessions, will sometimes mess up the file name in the tab. Or "notepad++" for Linux.

nottorp

I don't understand. Why would you need to render a user interface constantly at 120 fps, instead of just updating it when something changes? Laptop batteries last too long these days? Electricity too cheap?

as-cii

Hey nottorp. Antonio here, author of the post.

Zed and GPUI use energy very judiciously and only perform updates when needed. The idea is that we can render the whole application within ~8ms, and that shows everywhere: from typing a letter to scrolling up and down in a buffer. However, if the editor is sitting there idle, we won't waste precious CPU cycles.

Thanks for the feedback!
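The "only perform updates when needed" pattern is typically a dirty flag consulted once per vsync tick. A toy sketch of the idea (not GPUI's actual code):

```rust
// A damage-driven frame loop: the app marks itself dirty on input or
// animation, and the loop skips the (expensive) render entirely on clean
// frames, so an idle editor costs almost nothing per vsync tick.
struct App {
    dirty: bool,
    frames_rendered: usize,
}

impl App {
    fn handle_input(&mut self) {
        self.dirty = true;
    }

    fn render_if_dirty(&mut self) {
        if self.dirty {
            self.frames_rendered += 1; // would rebuild layout + paint here
            self.dirty = false;
        }
    }
}

fn main() {
    let mut app = App { dirty: false, frames_rendered: 0 };
    // 100 vsync ticks, with input arriving on only two of them.
    for tick in 0..100 {
        if tick == 10 || tick == 50 {
            app.handle_input();
        }
        app.render_if_dirty();
    }
    assert_eq!(app.frames_rendered, 2);
}
```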

rubymamis

Will you allow developers access to your GUI framework? What about open-sourcing it?

nottorp

Yeah, might want to edit the title a bit. Or not, considering these concepts are getting lost.

I mean, the win16 api from ages ago had support for invalidating specific screen regions etc. It probably got lost somewhere in the transition to javascript...

flohofwoe

It's not about rendering static screens at 120Hz, but rendering anything that's animated at a smooth 120Hz.

undefined

[deleted]

sdflhasjd

"Because it looks good" is probably the most popular reason.

nottorp

But if nothing changes it looks as good at zero fps :)

Edit: Yay, it's a text editor. What happened to only redrawing the line that's being edited and the status indicators?

DeathArrow

What's wrong with using platform APIs? I think that by 2023 most UI toolkits provided by the OS are hardware accelerated.

mariusmg

If their plan is to make their app cross-platform (Windows, macOS, Linux) and be very versatile with UI customization, then maybe writing a small, focused cross-platform UI toolkit tailored to their needs is not such a bad idea (after all, this is what Sublime Text does as well).

But the HW-acceleration brag is pointless. Even if they manage to squeeze some extra performance over native toolkits, that is not necessarily going to matter in the grand scheme of things. Drawing the UI is never the bottleneck in a text editor...

doodlesdev

> Drawing the UI is never the bottleneck in a text editor...

Unless you're using a webview, which is... unfortunately the case for some of the popular code editors available. Sad times we live in.

samsaga2

Smooth animations and performance. Drawing a big image with Win32 BitBlt is painfully slow, for example. Imagine you are zooming an image in Photoshop and it is laggy; the user experience would be horrible. Lag is an important issue in a user interface; even something as small as 100ms would be bad.

pjmlp

If you're using BitBlt instead of Direct2D in anything post Vista, you're holding it wrong.

speed_spread

Why not just use DirectX/*GL for those regions that need it and stick with platform UI for the rest? Blitting API still works just fine if you're drawing combo boxes, no?

DeathArrow

You can use ID2D1HwndRenderTarget::DrawBitmap or ID2D1RenderTarget::DrawBitmap instead.

tcfhgj

What about the winrt api?

pjmlp

Nee, Direct2D. Save yourself some pain.

WinRT has gone through multiple reboots, who knows what will happen still.

Better use the existing Win32/COM stuff.

nicoburns

It’s hard to make a cross-platform UI that way.

DeathArrow

But this UI is not cross platform either, as it is still using proprietary APIs.

pjmlp

That is what wrappers and platform plugins are for, no need to build a full blown API from scratch.

pornel

Such solutions are often in tension between using only the lowest common denominator, and the code having different implementation for each platform anyway.

For example, their editor has tabs for editor buffers. Cocoa has a static tabbed widget, which has the wrong look and odd UX for this. Cocoa also has a tabbed window type, which isn't a widget you can control. I imagine it'd be hard to abstract that away to work consistently with how Windows does tabbed views. I also haven't seen Windows tabs being draggable, so that would probably need a special DIY solution on Windows which Cocoa tabbed windows don't need.

Anyway, I think native UI toolkits are dying. For most people the Web is their most familiar "toolkit" now, and native platforms instead of fighting that back with clear consistent design, went for flat design and multiple half-assed redesigns that messed up all remaining expectations of how "native" looks and feels.

almostdigital

Looking forward to trying this, VSCode is great but I really miss the performance of Sublime Text. I hope they get the plugin system right, killer feature would be if it could load VSCode plugins (incredibly hard to pull off, yes)

as-cii

Thanks, almostdigital!

After our past experience with Atom, getting the plugin system right is a top priority for the editor.

The thought of cross compatibility with VSCode plugins definitely crossed our mind and it's not out of the question, although our current plan is to initially support plugins using WASM.

fassssst

Erm, native WinUI apps are GPU accelerated and render at vsync.

doodlesdev

Also, GTK4 is GPU accelerated whenever possible, with really well-maintained Rust bindings. I think the only thing missing would be macOS, for which I'm not sure what solutions exist.

Another option on this front could be Flutter, if write-once, run-everywhere is a need for the project. Another advantage is that it's not only GPU accelerated but also retained mode.

pjc50

Nobody loves native WinUI, not even Microsoft.

Which is kind of a shame. But it's the result of years of product management neglect as well as the pull away from desktop UIs to web UIs.

pjmlp

They only have themselves to blame, after the rewrites they forced their hardest advocates to go through, each one with worse tooling, dropping the UI designer, .NET Native, and C++/CX along the way.

Native AOT still can't compile WinUI, while C++/WinRT is like doing ATL in 2000, while bug issues grow exponentially.

Only WinDev themselves, and WinUI MVPs, can still believe this is going somewhere.

The rest of us have moved on.

kridsdale1

So is everything written in Apple native UI frameworks since 2009.

kllrnohj

And Android and QT and GTK and Chrome and Firefox and etc...

The article is talking about the same generic hybrid GPU-accelerated rendering architecture that everything uses. Seemingly the only "new" part is "in Rust!"

tayistay

My rui library can render UIs at 120fps, uses similar SDF techniques (though uses a single shader for all rendering): https://github.com/audulus/rui

Is their GPUI library open source?

inamberclad

Sadly I didn't see any links.

monkeydust

What's the real world client experience of developing UIs to render @ 120FPS - is it like once you have tried it going back is really hard?


Leveraging Rust and the GPU to render user interfaces at 120 FPS - Hacker News