Deno 1.39: The Return of WebGPU (deno.com)
192 points by oritsnile on Dec 14, 2023 | hide | past | favorite | 60 comments


Now all that's left is to integrate a minimal native system glue library into Deno (create a window with a WebGPU canvas, receive input events, stream audio buffers) and it would be a nice little platform for building and distributing small games and graphics demos written in WebAssembly and/or TypeScript :)
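To make the idea concrete, here is a purely hypothetical sketch of what the event-handling side of such a glue layer might look like. None of these names (`WindowEvent`, `pumpEvents`) exist in Deno today; this is an invented illustration of the window/input loop such a library would need to expose, not a real API:

```typescript
// Hypothetical event types a minimal glue layer might surface.
type WindowEvent =
  | { kind: "resize"; width: number; height: number }
  | { kind: "keydown"; key: string }
  | { kind: "close" };

// A tiny frame-loop skeleton: drain pending events, then draw.
// Returns false when the window was closed, so the caller can stop looping.
function pumpEvents(queue: WindowEvent[], onDraw: () => void): boolean {
  for (const ev of queue.splice(0)) {
    switch (ev.kind) {
      case "resize":
        console.log(`reconfigure swapchain to ${ev.width}x${ev.height}`);
        break;
      case "keydown":
        console.log(`key pressed: ${ev.key}`);
        break;
      case "close":
        return false; // stop the loop
    }
  }
  onDraw(); // a real app would record and submit a WebGPU render pass here
  return true;
}

// Simulated usage: two frames' worth of events.
const queue: WindowEvent[] = [{ kind: "resize", width: 800, height: 600 }];
pumpEvents(queue, () => console.log("frame drawn"));
queue.push({ kind: "close" });
const running = pumpEvents(queue, () => {});
console.log("running:", running); // false after the close event
```

This is roughly the shape of what winit/GLFW/SDL already provide natively; the glue library would only need to forward it into JS.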


> integrate a minimal native system glue library into Deno

have you tried denog (a fork of Deno which uses winit for windowing and also has WebGPU support)? https://github.com/denogdev/denog

As for Deno proper, according to this thread, it looks like one of the Deno maintainers has been working on windowing support, also using winit (same as denog linked above) - https://github.com/gfx-rs/wgpu/pull/3265#issuecomment-140065...


I'm curious why they needed to fork Deno to achieve this. Wasn't there a way to achieve it as a library/third-party package/wrapper or something?

The fork is already ~1700 commits out of date with the original. How often would they sync, etc.?


Maybe it was a bit simpler to pull in the Rust crates and add bindings. I've read that modifying and extending Deno is relatively easy to do in this way?


Upfront it's the simpler short-term solution, 100%. Impressive too, I might add. I love stuff like this. Turning JavaScript/TypeScript, a scripting language, into a usable cross-platform GUI that can do <canvas>-style and 3D rendering, if I understand correctly? So cool.

I just "worry" (in an unsolicited-advice fashion, might I add): is there a better alternative for maintaining this long term?


That's fantastic!


I think the ecosystem is inching in this direction. There are already some interesting PoCs out there using Tauri (Rust-based) and the deno_core Rust crate.


Do you have links to some of the more non-trivial ones?

Off-topic: does anybody know if the WebAssembly runtime inside Deno (which is actually the runtime inside V8) is exposed by any of the Deno crates? A cursory search seems to indicate that it isn't. You can access it from `deno_core` through JavaScript, of course.


The initial WebGPU Standard Library is such a promising step in this direction!


excellent idea!

and use `deno compile` to ship it as a self-contained binary
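For reference, the compile step might look something like this (`main.ts` is a placeholder entry point; `--unstable-webgpu` is the flag the Deno 1.39 release notes mention for WebGPU access):

```shell
# Bundle the game and the Deno runtime into a single self-contained executable.
deno compile --unstable-webgpu --output my-game main.ts

# The result is a native binary with no external dependencies:
./my-game
```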


Why wouldn't you just make a game that runs in a browser at that point?


Same reason why Electron exists, but in a much smaller bundle (since most of the actual browser engine wouldn't be included).

This would also be an opportunity to build better cross-platform APIs than what's available in browsers (e.g. WebAudio, fullscreen, pointer lock, text input (input in general), networking, ... are all a royal PITA on the web - basically any web API that isn't WebGL or WebGPU is mostly broken or half-assed).


That is what middleware is for.

So let's call it a game engine using JavaScript, instead of the usual ones.


If you want to call something simple like winit, GLFW or SDL 'middleware' then yes. It definitely would not be a game engine in the popular sense like UE or Unity; those are way too bloated for most use cases such a minimal cross-platform wrapper would be good for. It would be something you could build a game engine on top of, though.


It's more a cross-platform UI toolkit built around web standards.


It would just be a nice way to turn your standalone WASM game into an .exe (or whatever equivalent) without needing too much extra stuff, like if you wanted to distribute a version on Steam, or offer multiple downloads like itch.io does.

It's nothing groundbreaking. It would just be nice and convenient, is all.


And call it "Electron"?


It would be a much smaller and more efficient product than Electron because it wouldn't include a bundled web browser.


How would it be any different from Electron if it bundled a JS/WebAssembly engine (V8, which it already does) and an HTML canvas renderer (which would most likely be Blink)?


It would not bundle an HTML canvas renderer, only WebGPU rendering to a native window's client area.

HTML+CSS would be left out entirely.


That’s still only part of what you need. You could pretend the native window client area was a <canvas> and give the code a GPUCanvasContext that renders to it, but you still need to do something about handling user input, what happens when the window size changes, &c. &c. And if you’re working with such minimal primitives, you won’t be able to make something that looks or feels even vaguely native.

The browser gives you a lot of valuable functionality.


This is all trivial (in fact much, much easier) to do with a handful of tiny custom APIs instead of what HTML5 provides (there are plenty of native window system glue library examples which have much saner APIs for window and input handling: GLFW, a subset of SDL, winit...).

Rendering a UI that even remotely looks "native" is harder with HTML+CSS than with a handful of straightforward rendering primitives which can be implemented on top of WebGPU.

High quality text rendering is a bit non-trivial of course, but that's also only an issue if you want to do a native-looking UI (but mostly irrelevant for games).


Unless the Deno folks want to write that renderer from scratch, the only usable implementation right now is...part of Chromium.


The browser WebGPU implementations are available as native standalone libraries and are fairly easy (YMMV, especially for Dawn) to integrate into regular native applications:

Chrome's implementation (C++): https://dawn.googlesource.com/dawn

Firefox's implementation (Rust): https://github.com/gfx-rs/wgpu/


No it's not. wgpu (a WebGPU implementation written in Rust), which is what Deno is using under the hood in this change, works perfectly well and is used by many other projects. Even if that were not the case, Chrome's WebGPU implementation is also available separately, and it's called Dawn.


This whole thread is confusing. The blog post in the OP, the comments section of which you are posting in right now, is literally about adding WebGPU support to Deno, a headless JavaScript runtime that includes no HTML/CSS/DOM/windowing engine. WebGPU is a standardized API built on platform-specific graphics APIs; it is not a "renderer". The WebGPU support is based on wgpu, which isn't used in Chromium; it's actually the basis of Firefox's WebGPU implementation (and widely used by most Rust WebGPU projects, such as Bevy). The person you are replying to simply wants native windowing APIs added to Deno as well, which has been done about 1,000,000 times in various other languages.

I'm very confused because every comment you have posted in this reply chain makes no sense in the context of the actual blog.


the rest of the desktop software we know as the browser (the UI, a config system, security features, history, profiles, devtools...)


This is exciting

> The WebGPU API gives developers a low level, high performance, cross architecture way to program GPU hardware from JavaScript. It is the effective successor to WebGL on the Web. The spec has been finalized and Chrome has already shipped the API. Support is underway in Firefox and Safari.

Interestingly, the Deno team previously attempted[0] this and rolled it back due to instability. Once this is stable, it means that ML/AI workloads will be accessible to JS developers:

> These days, most neural networks are defined in Python with the computation offloaded to GPUs. We believe JavaScript, instead of Python, could act as an ideal language for expressing mathematical ideas if the proper infrastructure existed. Providing WebGPU support out-of-the-box in Deno is a step in this direction. Our goal is to run Tensorflow.js on Deno, with GPU acceleration. We expect this to be achieved in the coming weeks or months.

[0] https://deno.com/blog/v1.8#experimental-support-for-the-webg...
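To give a flavor of the kind of GPU-offloaded math the quote is describing, here is a minimal sketch using standard WebGPU calls (squaring an array on the GPU via a compute shader). It assumes Deno with the WebGPU flag enabled and bails out gracefully when no adapter is available; the helper names (`squareCpu`, `squareGpu`) are invented for illustration:

```typescript
// Avoid a hard dependency on WebGPU type declarations so the sketch
// also type-checks in environments without them.
const g = globalThis as any;

// WGSL compute shader: square each element of a storage buffer in place.
const shaderCode = `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;
@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3u) {
  if (id.x < arrayLength(&data)) {
    data[id.x] = data[id.x] * data[id.x];
  }
}`;

// CPU reference implementation, handy for sanity-checking GPU results.
function squareCpu(xs: number[]): number[] {
  return xs.map((x) => x * x);
}

async function squareGpu(input: Float32Array): Promise<Float32Array | null> {
  const adapter = await g.navigator?.gpu?.requestAdapter?.();
  if (!adapter) return null; // no adapter (or flag not set): caller falls back
  const device = await adapter.requestDevice();

  // Upload the input into a storage buffer.
  const storage = device.createBuffer({
    size: input.byteLength,
    usage: g.GPUBufferUsage.STORAGE | g.GPUBufferUsage.COPY_SRC,
    mappedAtCreation: true,
  });
  new Float32Array(storage.getMappedRange()).set(input);
  storage.unmap();

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: {
      module: device.createShaderModule({ code: shaderCode }),
      entryPoint: "main",
    },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [{ binding: 0, resource: { buffer: storage } }],
  });

  // Dispatch, then copy the result into a mappable readback buffer.
  const readback = device.createBuffer({
    size: input.byteLength,
    usage: g.GPUBufferUsage.COPY_DST | g.GPUBufferUsage.MAP_READ,
  });
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(input.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(storage, 0, readback, 0, input.byteLength);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(g.GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange().slice(0));
}

const result = await squareGpu(new Float32Array([1, 2, 3]));
console.log(result ?? squareCpu([1, 2, 3])); // GPU result, or CPU fallback
```

An ML library would dispatch kernels like this for matmuls and activations instead of elementwise squares, but the buffer/pipeline/dispatch plumbing is the same.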


> We believe JavaScript, instead of Python, could act as an ideal language for expressing mathematical ideas if the proper infrastructure existed

Outside of "everyone uses js", why do we believe this? What makes JS "ideal".

I would think for machine learning one would want:

- generally default to non-mutating, functional representation of ideas, with convenient escape hatches

- treats the GPU as a concurrent (or better yet, distributed) resource

- can seamlessly run ML code as either immediate execution or an optimized graph.

- could swap out compute resources as asynchronous rpc, like, flex out to something more powerful if available and needed

Certainly most of these are possible in JS, but I would hardly call it "ideal" for these bullet points, not to mention other general concerns like a highly questionable dependency management ecosystem.


JS lives where you visualize and explore data, so making it better at processing data would be a big win for workflows.

When you use a Jupyter notebook, the kernel wastes a lot of resources on visualization, and the support for interactive visualization isn't ideal (copy all your data to JSON, pass that to a JS library which has to do the work).

Take a look at ObservableHQ for an idea of what an interface might look like, then imagine numpy semantics and WebGPU on top.

As a bonus: run on server or in browser locally would reduce cloud bills because you don't need a whole server to do an exploratory analysis.


There is a TON of sunlight between "better than python" and "ideal"


I'd love to use JS instead of Python for math/numerical stuff for many reasons, but without operator overloading it's just hell.
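To illustrate the pain point: with operator overloading a vector expression reads as math, while without it every operation becomes a method call. A toy `Vec2` class (invented here for illustration) makes the contrast concrete:

```typescript
// Toy 2D vector type; the names are illustrative, not from any real library.
class Vec2 {
  constructor(public x: number, public y: number) {}
  add(o: Vec2): Vec2 {
    return new Vec2(this.x + o.x, this.y + o.y);
  }
  scale(k: number): Vec2 {
    return new Vec2(this.x * k, this.y * k);
  }
}

// Python (with __add__/__mul__ overloads):  p = a + b * 2
// JavaScript/TypeScript, no overloading:
const a = new Vec2(1, 2);
const b = new Vec2(3, 4);
const p = a.add(b.scale(2)); // Vec2 { x: 7, y: 10 }
console.log(p.x, p.y);
```

For a one-off it's tolerable; for pages of numerical code, the method-call form obscures the math.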


Interestingly I dislike Python for the exact same reason. Every time I look at a PR, I second guess every single operator and index. Does this attribute access carry a heavy penalty or side effects? Etc. I find it adds a large mental overhead to any given piece of code.


Side effects on operator overloads or attribute reads are just really bad practice and are very rarely done. Do you second-guess every identifier in C because it could be a preprocessor macro?

Yeah, stupid overloading like C++ IO is a mess, but I haven't encountered much such abuse in Python.


Fair points. I do find the scope of C macros to be far narrower than overloading, but you are right.


The usage of the word "ideal" in the 2021 post was probably not ideal :)

The accessibility and diverse ecosystem of JavaScript, owing to its status as the lingua franca of software development, would probably be the basis for it being "ideal". I think it would be hard to make an objective claim that one programming language is strictly better than another as a programming language for mathematical ideas.


> lingua franca of software development

s/software/web/


probably just rushed writing. the Deno team are smart; I imagine they know there are better languages than JS for these sorts of problems (technically)

giving them the benefit of the doubt, perhaps they mean is that it's ideal having these capabilities accessible in JS since it can be run (almost) everywhere and there are a ton of developers who know JS


Just yesterday I was looking to play around with WebGPU and found the 1.8 release page. I had no idea it was removed. I hope that WebGPU will gain adoption as the best cross-platform system. The incompatibility between platforms for OpenGL or Vulkan always made me not want to learn them.


Small correction: It wasn't rolled back due to instability, it was because it increased deno startup time even for programs that weren't making use of WebGPU apis. That is no longer the case this time around.

As someone who detests python, I really hope we can get some good TS WebGPU libs going. In fact any ML lib using WebGPU under the hood would be quite portable to other languages and platforms.


See also Web Neural Network API: https://www.w3.org/TR/webnn/


JavaScript is useless for data exploration; that's why very few data scientists use JavaScript or any of the other non-whitespace-based languages. Curly braces don't work well in iterative, exploratory programming.


"Languages with curly braces are categorically useless for data exploration" is certainly ... a take.


I don’t think even the creator of Python would agree with this..


i've built an entire career building apps and tools in JS for data scientists to explore complex data. tons of companies pay lots and lots of money for this - i'm currently at a FAANG doing exactly this with data scientists.


Did you forget about R?


yet it doesn't even have a comparable market share (even though its whole ecosystem is dedicated to data science)


don't nobody tell him about this double-bracket [[]] language called Mathematica!


I've been wondering for a long time when we might expect to see a stable WebGPU API in all major browsers (mostly concerned with my daily browser, Firefox), so I've been looking for an official message on the topic. Deno claims the spec is ready

> The [WebGPU] spec has been finalized

but the official WebGPU spec [1] still describes it as a draft. Have I misinterpreted something here or is there some missing context around Deno's statement?

[1]: https://www.w3.org/TR/webgpu/


In the W3C process, in order for a TR to upgrade from a Draft to a finished specification, it needs two shipping implementations. Firefox and Safari are both working on theirs. We hope the only changes involved in this process will be minor and/or editorial. Once a second implementation ships, it will move out of draft.


This might be a better page to get an idea of the status in different browsers:

https://github.com/gpuweb/gpuweb/wiki/Implementation-Status


If you're using any Chromium based browser it's on your machine right now.


The question is, is it still going to change?


Minor things probably yes (for instance check the "What's new in WebGPU (Chrome xxx)" articles here: https://developer.chrome.com/docs/web-platform/webgpu), breaking changes probably not.


There seem to be no 1.0 discussions yet on the GitHub project; it seems possible it's a mistake on Deno's end.


I'm still so bullish on Deno. They iterate quickly, but they also have a clear vision and direction. I've almost completely stopped using Node on personal projects; Deno is the new world I want for non-browser JavaScript.

Recent example: I've got a Deno server running a production service. I needed to make some bulk changes in the DB, and I was able to import DB types and utils directly from the main server project and quickly write a TypeScript script that used them to make the changes I needed, with the new script being the entrypoint, no config changes made, and the rest of the project being ignored. This would be practically impossible in a Node.js project that used any TypeScript, ES modules, etc.


I gotta admit at the outset I was more than a little sceptical of WebGPU for ML in general, but now I have much greater respect for the usefulness of local inference as part of a webapp and the increasing ability to actually run quantized models in the browser. Glad I was wrong!


Why were you skeptical of it originally? I wrote about this coming to pass over a year ago now: https://fleetwood.dev/posts/a-case-for-client-side-machine-l...


> WebGPU is still considered unstable in Deno. To access it in Deno use the --unstable-webgpu flag


That's a formality. Once major browsers ship WebGPU without flags, the spec itself can actually be codified as a standard rather than a draft. And I'm sure Deno will follow suit in the version after that event.

Based on current progress in browsers, I'd guess at least 2 will have it enabled without flags by end of Q1 2024. Chrome sort of already does, but for select platforms only.

I highly doubt the API will change from now till then.



