Can we compile it to WebAssembly and benchmark...?
It won't be the performance boost you're expecting.
Three.js does a lot more than just wrap the WebGL API, though (the WASM => JS calling overhead is negligible these days compared to the time spent inside WebGL).
(You're right, of course, that most of the time there won't be much of a difference, but that's not because of the WASM => JS marshalling; it's because of the time spent inside the WebGL calls themselves, which can be shockingly high.)
PS: Yeah, looking at the C++ headers, it's shared_ptr<> all the way down :( This might actually add more refcounting overhead when natively compiled than what typical Three.js code spends in JS garbage collection.
I wonder about that too.
I really like three.js, I use it, and I like that people are doing good things like this with it.
I wonder if someone building a 3D library for the web today would do things the same way three.js does. Under the bonnet, three.js is effectively a bunch of string concatenations that build up GLSL shaders for things like materials. It works well, but if you look through the code it really is a massive series of if statements (and loops? :) ) like "if a lambert material else if phong else if standard else if physical, and if lights > 0, and if env maps..." Hundreds of them, all smooshed together. The external API you use to access everything is quite clear and easy, but something like extending a material is much harder than it needs to be (you essentially use a regex to modify the big concatenated GLSL shader string..).
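To make that concrete, here's a toy sketch of the concatenation pattern I'm describing (names and chunks are made up for illustration, not three.js's actual internals):

```javascript
// Hypothetical sketch of building a GLSL fragment shader by string
// concatenation, branching on material type and features. This is an
// illustration of the pattern, not real three.js code.
function buildFragmentShader(material, numLights) {
  let src = 'precision highp float;\n';

  if (numLights > 0) {
    src += 'uniform vec3 lightColors[' + numLights + '];\n';
  }

  if (material.type === 'lambert') {
    src += '// diffuse-only lighting chunk\n';
  } else if (material.type === 'phong') {
    src += '// diffuse + specular lighting chunk\n';
  } else if (material.type === 'standard') {
    src += '// PBR metalness/roughness chunk\n';
  }

  if (material.envMap) {
    src += '// environment-map sampling chunk\n';
  }

  src += 'void main() { /* ... */ }\n';
  return src;
}

const shader = buildFragmentShader({ type: 'phong', envMap: true }, 2);
console.log(shader);
```

Multiply that by hundreds of feature flags and you get the wall of branches in the real renderer.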
I can understand wanting to use the API in another language, but I'm less sure about wanting to use the internals.
We are slowly moving away from that approach.
Have a look at the NodeMaterial API that's being designed right now:
Hmmm, not working in Mobile Safari on my latest iPhone. All the other examples are working…
Try with the dev version:
I've added custom materials building upon THREE's shader code, and found this part more organized than you're describing, though undocumented. They're using a custom shader preprocessor, and shaders use it to include different "chunks" based on the type of material, whether it has lighting/shadows enabled, etc. Some parts are a bit messy, but overall it's a decent compromise to generate efficient shader code for each material (and I don't think it counts as an "uber shader").
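The chunk-include idea boils down to something like this simplified sketch (the chunk names and resolver here are invented; three.js's real preprocessor and chunk library differ):

```javascript
// Simplified sketch of a chunk-based shader preprocessor: templates
// reference named chunks, and the resolver splices them in recursively.
// Chunk contents and names are illustrative, not three.js's actual ones.
const chunks = {
  common: 'float saturate(float x) { return clamp(x, 0.0, 1.0); }',
  lights: 'uniform vec3 lightColor;',
};

// Expand every `#include <name>` directive against the chunk table.
function resolveIncludes(src) {
  return src.replace(/#include <(\w+)>/g, (match, name) => {
    const chunk = chunks[name];
    if (chunk === undefined) throw new Error('unknown chunk: ' + name);
    return resolveIncludes(chunk); // chunks may include other chunks
  });
}

const shaderTemplate = [
  '#include <common>',
  '#include <lights>',
  'void main() { /* ... */ }',
].join('\n');

const expanded = resolveIncludes(shaderTemplate);
console.log(expanded);
```

Each material's template pulls in only the chunks it needs, which is how you get a specialized shader per material without hand-writing each one.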
Note this is all without using the new NodeMaterial, which I didn't even realize existed because its documentation is missing too.
This is a reasonably common technique in game engines, actually, usually referred to as an “uber shader”. The idea is to have one massive shader with a bunch of if’s that does everything, and then you don’t have to pay the cost of shader state changes.
There are lots of arguments for/against the uber shader technique, but I’m guessing it was probably intentional for performance reasons, not just an accident of development.
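For illustration, a hedged sketch of the runtime-branching flavor of an uber shader (the GLSL here is a made-up example, not any engine's real shader):

```javascript
// One shader program covers every material path; uniform flags pick the
// branch at runtime. Hypothetical example for illustration only.
const uberFragmentShader = `
uniform int materialType; // 0 = lambert, 1 = phong, 2 = standard
uniform bool useEnvMap;

void main() {
  vec3 color = vec3(0.0);
  if (materialType == 0)      { /* diffuse-only path */ }
  else if (materialType == 1) { /* diffuse + specular path */ }
  else                        { /* PBR path */ }
  if (useEnvMap)              { /* add reflection */ }
  gl_FragColor = vec4(color, 1.0);
}`;

// Switching materials is then just a uniform update on the one program,
// instead of binding a different compiled shader per material.
function uniformsFor(material) {
  return {
    materialType: { lambert: 0, phong: 1, standard: 2 }[material.type],
    useEnvMap: Boolean(material.envMap),
  };
}

console.log(uniformsFor({ type: 'phong', envMap: true }));
```

The usual counterargument is that all those runtime branches cost GPU time on every fragment, which is why many engines instead compile specialized variants from one source.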
As far as WebGL APIs go, I'd rather use BabylonJS or PlayCanvas, for the tooling.
Could you elaborate? I've done a fair bit of Threejs and would love to hear about better tools/libraries.
PlayCanvas is like using Unity-style tooling that was developed with WebGL as its only target.
Microsoft has a YouTube channel with BabylonJS tutorials.
See also: Gthree is a GObject/Gtk+ port of three.js
Upvoted bc Three.js is amazing. Hopefully this project is amazing, too.
Another one :)
These come along periodically and I am all for them, but the tricky part is that ThreeJS carries on evolving, and most of them die in the end.
Without some sort of ThreeJS test set, it's not possible for these implementations to keep up.
Nice for porting purposes; other than that, it's constrained to a GL ES 3.0 subset, given what's available in WebGL 2.0.
This is great. The C++ ecosystem is becoming truly awesome.