Hacker News

Show HN: Apple's SHARP running in the browser via ONNX runtime web

github.com

Hi HN, author here. SHARP is Apple's recent single-image 3D Gaussian splatting model (https://arxiv.org/abs/2512.10685). Their reference code is PyTorch + a pretty heavy pipeline; I wanted to see if it could run in a browser with no server hop, so I exported the predictor to ONNX and ran it via onnxruntime-web with the WebGPU EP.

What works: drop in an image, get a .ply you can download or preview live, all on your machine — your image never leaves the tab. The model is large (~2.4 GB sidecar) so first load is slow on a cold cache, but inference itself is a few seconds on a recent Mac.
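For the curious, the splat .ply container is simple enough to sketch. This is a minimal writer in plain Python assuming the usual Gaussian-splat layout (binary little-endian vertex records with position, SH DC color, opacity, scale, and rotation quaternion) with a reduced property set — an illustration, not SHARP's actual exporter:

```python
import struct

def write_splat_ply(path, points):
    """Write a minimal Gaussian-splat-style binary .ply.
    `points` is a list of dicts with position, color (SH DC term),
    opacity, scale, and rotation quaternion -- a reduced property set."""
    props = (["x", "y", "z"]
             + [f"f_dc_{i}" for i in range(3)]
             + ["opacity"]
             + [f"scale_{i}" for i in range(3)]
             + [f"rot_{i}" for i in range(4)])
    header = "\n".join(
        ["ply", "format binary_little_endian 1.0",
         f"element vertex {len(points)}"]
        + [f"property float {p}" for p in props]
        + ["end_header", ""])
    with open(path, "wb") as f:
        f.write(header.encode("ascii"))
        for p in points:
            # 3 + 3 + 1 + 3 + 4 = 14 float32 values per splat
            row = p["xyz"] + p["f_dc"] + [p["opacity"]] + p["scale"] + p["rot"]
            f.write(struct.pack("<14f", *row))
```

At 14 float32 properties that's 56 bytes per splat, which is why even modest scenes produce multi-megabyte .ply files.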

Caveats: SHARP's released weights are research-use only (Apple's model license, not the code's). I host the exported ONNX on R2 so the demo "just works", but you can also export your own from the upstream Apple repo and load it locally.

Happy to talk about it in the comments :)


exabrial

A *2.4gb* ONNX? That is wild. This format continues to impress me. ONNX uses 32-bit single-precision floats I believe, so that's something like ~644M float params/constants. I recently dove deep into the 'traditional ML' side of the ONNX serialization format for the purposes of writing a JVM ML compiler for trees and regressions. ONNX is actually quite clever in the way it serializes trees into parallel arrays (which are then serialized using protobuf). My trees have capped out at < 32mb. I haven't dug into the neural net side of things yet, mainly because I don't have any models to run in prod. (https://github.com/exabrial/petrify if anyone is interested.)
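The parallel-array layout is easy to picture. Here is a hypothetical single tree in that style, evaluated in plain Python — the array names are simplified stand-ins, not the actual ONNX TreeEnsemble attribute names:

```python
# Hypothetical decision tree flattened into ONNX-style parallel arrays.
# Node i: if x[feature[i]] <= threshold[i], go to true_id[i], else false_id[i].
# "LEAF" nodes carry their output in leaf_value[i]; -1 marks unused slots.
modes      = ["BRANCH_LEQ", "LEAF", "BRANCH_LEQ", "LEAF", "LEAF"]
feature    = [0, -1, 1, -1, -1]
threshold  = [5.0, 0.0, 2.5, 0.0, 0.0]
true_id    = [1, -1, 3, -1, -1]
false_id   = [2, -1, 4, -1, -1]
leaf_value = [0.0, 10.0, 0.0, 20.0, 30.0]

def predict(x):
    """Walk the flattened tree from the root until a leaf is reached."""
    i = 0
    while modes[i] != "LEAF":
        i = true_id[i] if x[feature[i]] <= threshold[i] else false_id[i]
    return leaf_value[i]
```

An ensemble just concatenates every tree's nodes into the same arrays and adds a tree-id column, which is what makes the protobuf encoding so compact.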

vunderba

Same, I really like the ONNX format. I only wish that they weren't so frustratingly difficult to use on Apple iOS. Their browser engine, WebKit, has become annoyingly restrictive over the years in terms of the working memory cap.

I ran into quite a few out-of-memory issues in iOS Safari when I was building continuous voice recognition for my blind chess game, so people could play while on the go.

bring-shrubbery

Interesting, what use cases are you using onnx for btw?

vunderba

So I use a VAD ONNX model (Silero [1]) to automatically detect when someone is talking, and then it sends the audio into one of the voice recognition libraries.

I originally tried to get away with just Whisper Tiny in the chess game [2], but it performs worse on the kinds of short phrases (knight E4, c takes d5, etc) used to dictate chess notation. Even with hotword-based phrasing and corrections, I found its accuracy on brief inputs noticeably poorer. So I switched over to Sherpa [3] trained on gigaspeech. It’s significantly more accurate, but it also comes with a correspondingly larger memory footprint.

Ideally, I would have used just one engine, but I needed a fallback for iOS devices (especially older ones) which can easily OOM.

[1] - https://github.com/snakers4/silero-vad

[2] - https://shahkur.specr.net

[3] - https://github.com/k2-fsa/sherpa-onnx
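The gating idea in that pipeline can be sketched without Silero. This is a toy RMS-energy detector, illustrative only — Silero is a learned model, not an energy threshold:

```python
import math

def energy_vad(samples, frame_len=160, threshold=0.02):
    """Toy energy-based voice activity detector: mark each fixed-size
    frame as speech (True) or silence (False) by its RMS energy.
    Frames above the threshold would be forwarded to the recognizer."""
    flags = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        flags.append(rms >= threshold)
    return flags
```

The practical win of any VAD gate is the same: the (much heavier) recognition model only runs on frames that plausibly contain speech.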

ollin

Most ONNX files are fp32, but the ONNX format actually allows fp16, int8, etc. as well (see onnx.proto for the full list of dtypes [1] - they even have fp8/fp4 these days!). I ended up switching over to fp16 ONNX models for my own web-based inference project since the quality is ~identical and page loads get 2x faster.

[1] https://github.com/onnx/onnx/blob/main/onnx/onnx.proto#L605
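The size/precision trade-off behind that 2x is easy to demonstrate with Python's struct module, which supports IEEE 754 binary16 via the "e" format:

```python
import struct

value = 0.1337
f32 = struct.pack("<f", value)   # 4 bytes, single precision
f16 = struct.pack("<e", value)   # 2 bytes, half precision (IEEE 754 binary16)

# Round-tripping shows the precision cost of the smaller dtype:
# fp16 keeps only ~3 significant decimal digits.
back32 = struct.unpack("<f", f32)[0]
back16 = struct.unpack("<e", f16)[0]
```

Halving every weight tensor this way is what halves the download, and for most inference workloads the ~1e-3 relative error is invisible in the output.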

exabrial

Thanks for the pointer actually. I need to take a look at this version of the spec.

bring-shrubbery

Yeah it's pretty cool what a 2gb NN can do from a single image

andybak

I vibecoded a simple web app using Sharp that allowed me to quickly browse any local image folder and view them as "almost" volumetric 3d scenes in a VR headset.

I precomputed and cached each one so it was nearly instant. The effect - although only a crude wrapper around what Sharp already does - was quite transformative and mesmerising. Just the ease of pointing it at any folder of photos and viewing them fully spatially.

It was a bit of a mess code-wise and kinda specific to my local setup - but I should really clean it up and deploy it somewhere for other people to try. Although I keep assuming someone else will do it before me and make a better job of it.

bring-shrubbery

Nice, would love to see it, feel free to link it here <3

SpyCoder77

I would love to try that out, if you ever make it let me know.

andybak

My email is in my profile - ping me and I'll be much more likely to remember to do it.

kodablah

Nice, I've also been doing some similarly neat things via ONNX web at https://intabai.dev (caution, just PoC tools atm, only Chrome tested, only some mobile devices work, no filters).

I think all-client-side in-browser AI imagery is becoming very doable and has lots of privacy benefits. However ONNX web leaves a lot to be desired (I had to proto patch many pytorch conversions because things like Conv3D ops had webgpu issues IIRC). I have yet to try Apache TVM webgpu approaches or any others, but I feel if the webgpu space were more invested in, running these models would be even more feasible.

bring-shrubbery

Interesting. Yeah in-browser is not the best, but getting much easier over time!

amelius

I don't like that it uses only a single photo. This means it is going to make up a lot of stuff. E.g. if I show it a photo of a poster, then it will make that poster 3D. With only two photos that problem would already be solved.

bring-shrubbery

Yeah I completely agree, but I think this model solves a different problem. AFAIK it's specifically there for the case where you only have one photo, but still need a 3D gaussian splat scene.

andybak

I haven't tried that specific case but - are you sure? It does get a lot of stuff right from context. I think it would probably depend on how much of the frame the poster took up.

deanva

More reference images from different angles will always give more accurate 3D information. From a single 2D image there is a lot of ambiguity: several different 3D shapes can project to identical 2D images. Additional context like lighting and shadows helps, but more real signal from more images will always be better.

andybak

I'm not saying it wouldn't be - because that's obvious.

amelius

Maybe, but what is wrong with wanting real depth instead of "made up depth"? One extra photo mostly solves that.
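The "one extra photo" point is backed by textbook two-view geometry: with a known baseline and focal length, depth follows directly from pixel disparity instead of being hallucinated. A minimal rectified-stereo sketch (illustrative numbers):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Classic rectified two-view relation: Z = f * B / d.
    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal pixel shift of a point between the views
    Returns depth in meters."""
    return focal_px * baseline_m / disparity_px
```

For example, a 1000 px focal length, a 10 cm baseline, and a 50 px disparity put the point 2 m away. A flat poster produces disparities consistent with a plane, which is exactly the ambiguity a single image cannot resolve.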

andybak

1. There's many use cases where only a single photo is available

2. There are many models similar to Sharp that do accept multiple photos - but Sharp is trying to solve a specific problem. If you have multiple photos - don't use Sharp.

javier2

Did not work in Firefox on Linux, but it runs on Chrome.

Have to admit, I don't get it. I tried it with 3 landscape photos I have and the results were nowhere close to the results in the demo, but that just speaks to the model.

Regardless, it's very cool as a browser tech showcase.

bring-shrubbery

Thanks for trying it out! How much RAM do you have? Pretty sure that's the only issue that can occur. The quality varies depending on the image too, so it might have been unlucky photos :(

parentheses

I've been poking at running LLMs in the browser. It feels like we're definitely close (<1 year) to seeing real use cases there.

Ubiquity and coverage of devices is what will take longest. It largely depends on how well we can shrink models with similar performance and how much we can accelerate mobile devices. That feels like it's a bit further out (<3 years?).

david_mchale

I can't wait to get something like this small enough to fit into a browser extension. I already use ONNX for zoom + enhance in Ultra Zoom, zooming from 2d to 3d would be crazy.

jeroenhd

What are the requirements for running this? Chrome throws a whole bunch of "out of memory" errors into the console when I try to execute these. I'm guessing 4GiB of VRAM is not enough?

bring-shrubbery

Ahh, yeah I forgot to mention it. The model is 2.5gb, so I assume you'd need at least 3gb free with all the surrounding stuff. With the rest of your system using RAM too, I'd guess 4gb is way too low, and even 8gb might not be enough on some occasions.

I personally tested it on 32gb Apple M2, and it's able to run much heavier stuff.

jeroenhd

My laptop's GPU has 4GiB of VRAM but it still failed to allocate enough memory. Seems like I'll have to pass on demos like these until someone figures out a way to use less (V)RAM to accomplish these kinds of things.

vessenes

This is cool. For practitioners, What’s the current state of the art for free form multi picture to splat? The last time I looked at it the pipeline was pretty janky and included a few separate steps.

bring-shrubbery

For multi-photo, the go-to is still the original 3D Gaussian Splatting (Kerbl et al., 2023) — most consumer tools like Polycam, Luma, and Postshot wrap that under the hood.

herpdyderp

Loading the model crashes my browser tab from memory usage :/

bring-shrubbery

Yeah, I think you need at least 8gb ram unfortunately, but I tested it only on a 32gb M2, so 8gb might also not be enough.

I might create a compressed version of the model, that would work on low-ram machines.

kodablah

I've worked around lower RAM machines with ONNX web models by first separating .onnx from .onnx_data, and second having scripts that split up the "layers" and shards the run (e.g. https://huggingface.co/cretz/Z-Image-Turbo-ONNX-sharded). Then you can have the runtime only run one at a time. I don't understand the details too deep, but Claude is good at writing scripts to shard onnx protos.
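The byte-level half of that sharding can be sketched in a few lines. This is illustrative only — real ONNX sharding also has to respect tensor boundaries in the graph, not just byte offsets:

```python
import os

def shard_file(path, shard_bytes):
    """Split a large weight file (e.g. an .onnx_data sidecar) into
    numbered shards so a client can fetch and process them one at a
    time instead of holding the whole file in memory."""
    shards = []
    with open(path, "rb") as f:
        i = 0
        while True:
            chunk = f.read(shard_bytes)
            if not chunk:
                break
            shard_path = f"{path}.shard{i:03d}"
            with open(shard_path, "wb") as out:
                out.write(chunk)
            shards.append(shard_path)
            i += 1
    return shards

def reassemble(shards, out_path):
    """Concatenate shards back into the original file."""
    with open(out_path, "wb") as out:
        for s in shards:
            with open(s, "rb") as f:
                out.write(f.read())
```

Since ONNX external data is addressed by byte offset into the sidecar file, keeping .onnx and .onnx_data separate is what makes this kind of chunked delivery possible at all.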

anentropic

It froze up my computer, had to hard-boot lol

(16GB M1 MacBook, Chrome)

andruby

Are there any examples one could view before downloading?

bring-shrubbery

There's nothing to download - you can run everything from your browser, and the photo you upload is not uploaded anywhere, it stays in your browser.

utopiah

The point is what does a typical result look like, especially before loading a 2GB model or making a browser crash.

Model results https://apple.github.io/ml-sharp/

sroussey

Does it use both WASM and WebGPU?
