turbolent's comments

... in browsers, which at best JIT compile. There are several WASM runtimes that AOT compile and get significantly closer to native performance (e.g. only ~5-10% slower).

The title is highly misleading.


It’s not misleading to measure the performance of WebAssembly in a web browser.


Yeah, but it's specifically testing programs that implement against a POSIX API (because that's generally what "native" APIs do, omitting libc and other OS-specific foundation libraries that are pulled in at runtime or otherwise). I suspect that if the applications linked against some WASI-like runtime instead, it might be a better metric (native WASI as a library vs. a WASM runtime that also links it). Mind you, that still wouldn't help the browser runtime... But it would be a better metric for a WASM-to-native performance comparison.

But as already mentioned, we have gone through all this before. Maybe we'll see WASM bytecode pushed into silicon like we did with the JVM... Although perhaps this time it might stick, or move up into server hardware (which might have happened, but I only recall embedded devices supporting JVM bytecode at the hardware level).

In short, the web browser bit is omitted from the title.


WebAssembly is neither web nor assembly. It's a low-level bytecode format, most similar to LLVM IR.


Just means the browsers can catch up.

Initially slower, but then faster after full compilation.


Browsers have been doing (sometimes tiered) AOT compilation since WASM's inception.


could you please name them?


WAMR (WebAssembly Micro Runtime), wasm2c in WABT (WebAssembly Binary Toolkit), Wasmtime.


thank you very much!



Very few languages have "Some Language -> C" or "Some Language -> non-common OS / arch combo". The "just" part is a whole new backend, which is a massive amount of work for common languages.

But it turns out many languages do have "Some Language -> WASM" now. WebAssembly brings portability to the table.


Exactly!


You do not have to deal with the generated C; simply consider it the IR.

The main benefit of generating C over LLVM IR is portability: C is supported by far more systems than LLVM can target.

For example, it enables porting Rust applications to Mac OS 9 (https://twitter.com/turbolent/status/1617231570573873152), or porting Python to all sorts of operating systems and CPUs (https://twitter.com/turbolent/status/1621992945745547264).

The main "goal" of w2c2 so far has been to allow porting applications and libraries to as many systems as possible. For more information, see the README of w2c2.


You do have to deal with the generated C though. Unless you just generate it and throw it away?


Just to clarify: compared to wasm2c, w2c2 does not (yet) have sandboxing capabilities, so it assumes the translated WebAssembly module is trustworthy. The main "goal" of w2c2 so far has been to allow porting applications and libraries to as many systems as possible.


https://gvisor.dev/docs/architecture_guide/platforms/ :

> gVisor requires a platform to implement interception of syscalls, basic context switching, and memory mapping functionality. Internally, gVisor uses an abstraction sensibly called Platform.

Chrome sandbox: https://chromium.googlesource.com/chromium/src/+/refs/heads/...

Firefox sandbox: https://wiki.mozilla.org/Security/Sandbox

Chromium sandbox types summary: https://github.com/chromium/chromium/blob/main/docs/linux/sa...

Minijail: https://github.com/google/minijail :

> Minijail is a sandboxing and containment tool used in ChromeOS and Android. It provides an executable that can be used to launch and sandbox other programs, and a library that can be used by code to sandbox itself.

Chrome vulnerability reward amounts: https://bughunters.google.com/about/rules/5745167867576320/c...

Systemd has SystemCallFilter= to limit processes to certain syscalls: https://news.ycombinator.com/item?id=36693366
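As a sketch, a unit file might combine an allow-list with a deny-list of syscall groups. `@system-service` and `~@privileged` are real systemd filter set names; the service name and binary path here are made up:

```
# /etc/systemd/system/myapp.service (hypothetical service)
[Service]
ExecStart=/usr/bin/myapp
# Allow only the syscalls common system services need...
SystemCallFilter=@system-service
# ...and additionally deny privileged ones.
SystemCallFilter=~@privileged
NoNewPrivileges=yes
```

A filtered syscall kills the process (or returns an error, with SystemCallErrorNumber=), so the filter acts as a coarse per-service sandbox without any code changes.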

Nerdctl: https://github.com/containerd/nerdctl

Nerdctl, podman, and podman-remote do rootless containers.


If you are looking for a more flexible solution with support for any layout and for animations, something like iOS' UICollectionView or Android's RecyclerView, have a look at https://github.com/turbolent/collection-view.


Impressive. What are the differences in decoding/load time? Especially on mobile devices, this is a big deciding factor.


The paper (http://www.cs.cmu.edu/~nlao/publication/2014.kdd.pdf) mentions the extracted knowledge base is about 38 times larger than DeepDive's, the largest previous comparable system.


