Hacker News | new | past | comments | ask | show | jobs | submit | danjl's comments | login

> I was afraid the puzzle-solving was over. But it wasn't—it just moved up a level.

The craft can move up a level too. You can still make decisions about the implementation: which algorithms to use, how to combine them, how and what to test -- essentially crafting the system at a higher level. In a similar sense, we lost the hand-crafting of assembly code as compilers took over, and now we're losing the crafting of classes and algorithms to some extent, but we still craft the system -- what it does, how it does it, and most importantly, why.


Just saying "no" is unclear. LLMs are still very sensitive to prompts. As a general rule, I would recommend being more precise and assuming less. Of course, you also don't want to be too precise, especially about "how" to do something, which tends to back the LLM into a corner and cause bad behavior. In my experience, the key is communicating intent clearly.

> Just saying "no" is unclear.

No.


Yes, more time on up-front spec and plan building. Each step is bite-sized, specifically to fit within the context window of a single implementation session, and has a verification process that includes new tests.

Prior to each step, I prompt the AI to review the step and ask clarifying questions to fill in any missing details. Then I implement. Afterward, I prompt the AI to review the changes for any fixes before moving on to the next step. Rinse, repeat.

The specs and plans are actually better for sharing context with the rest of the team than a traditional review process.

I find the code generated by this process to be better, in general, than the code I've written over my previous 35+ years of coding: more robust, more complete, better tested. I used to "rush" through this process, with less upfront planning and more focus on getting a working scaffold up and running as fast as possible. Each step along the way was implemented a bit quicker and less robustly, with the assumption that I'd return to fix up the corner cases later.


I think that comment is interesting as well. My view is that there is a lot of Electron training code, and that helps in many ways, both in terms of app architecture and the specifics of dealing with common problems. Any new architecture would have unknown and unforeseen issues, even for an LLM. The AIs are exceptional at doing things they have been trained on, and even at abstracting some of the lessons. The further you deviate from a standard app, perhaps even a standard CRUD web app, the less the AI knows about how to structure it.


Thanks for the performance info! More recent Apple chips get much better performance. Also worth trying the Fast quality setting. Great suggestion about default camera positions. I'll add that to the to-do list. Love the idea of a blindfolded chess app with voice control.


Just wait until they piss off the AI


Sad to see truth die at the whims of a billionaire


“Truth”

Lol. Lmao.


The lack of word wrapping, which we have grown so accustomed to on the Web, makes this article nearly illegible.


Fixed! Thanks for the heads-up. I'm used to writing in Korean, where wrapping is rarely an issue, so I missed this during the English migration. It's updated now; thanks for your patience!


Love it. Double bonus points if we could somehow add hardware ray tracing, which is missing from the browser WebGPU implementations. I'll definitely give this a shot with my 3D interactive WebGPU chessboard. A desktop implementation opens up the possibility of publishing on Steam and other services, which all expect a desktop app.


So in theory it should be possible, but it might require customizing the Dawn or wgpu-native builds if they don't support it (this project provides the JS bindings / wrapper around those two implementations of wgpu.h). But I've already added a special C++ method to handle Draco compression natively, so adding some Mystral-native-only methods is not out of the question (however, I would want to ensure that usage of those via JS is always feature-flagged so that it doesn't break when run on the web).

Did you write your WebGPU chessboard using the raw JS APIs? Ideally it should work, but I just fixed up some missing APIs to get Three.js working in v0.1.0, so if there are issues, please open an issue on GitHub - I'll try to get it working so we can close any gaps.
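A minimal sketch of what that feature flag could look like on the JS side. Note the `__mystralNative` bridge object and its `decodeDraco` method are hypothetical names for illustration, not the actual MystralNative API:

```javascript
// Stub for the wasm decode path a plain web build would use.
function decodeDracoWasm(buf) {
  return { source: "wasm", buf };
}

// Return a decoder function: take the native fast path when the
// (hypothetical) bridge object is present, otherwise fall back to wasm.
function pickDracoDecoder(env = globalThis) {
  const native = env.__mystralNative; // hypothetical native bridge
  if (native && typeof native.decodeDraco === "function") {
    return (buf) => native.decodeDraco(buf);
  }
  return (buf) => decodeDracoWasm(buf);
}
```

The call site stays identical in both environments; only the probe at startup differs, which is what keeps the same bundle runnable on the web.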


Here's a Dawn implementation with support for ray tracing that was written a number of years ago but never integrated into browsers. Perhaps it will help?

https://github.com/maierfelix/dawn-ray-tracing

Yes, chessboard3d.app is written with raw JS APIs and raw WebGPU. It does use the Rapier physics library, which uses WASM - might that be an issue? It implements its own ray tracing, but would probably run 10x faster with hardware ray tracing support.

I think you'd get a lot of attention if you had hardware ray tracing, since that's currently only available in DirectX 12 and Vulkan, which requires implementing it on native desktop platforms. FWIW, if the path looks feasible, I would be interested in contributing.


WASM shouldn't be an issue, since the Draco decoder uses it - but it may only work with V8 (it wouldn't work for QuickJS builds, but the default builds use V8+Dawn). Obviously, with an alpha runtime, there may be bugs.

I think it would be super cool to have some sort of extension before WebGPU (on the web) has it. I was taking a look at the prior example, and it seems like there's a good ongoing discussion about it linked here: https://github.com/gpuweb/gpuweb/issues/535. Also, I believe Metal has hardware ray tracing support now too?

Re: implementation, a few options exist - a separate Dawn fork with RT is one path (though Dawn builds are slow, 1-2 hours on CI). Another approach would be exposing custom native bindings directly from MystralNative alongside the WebGPU APIs - that might make iteration much faster for testing feasibility. The JS API would need to be feature-flagged so the same code gracefully falls back when running on the web (I did this for a native Draco impl too, which avoids having to load WASM: https://mystralengine.github.io/mystralnative/docs/api/nativ...).
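For the graceful-fallback part, one way the detection could be sketched. Caveats: "ray-tracing" is a made-up feature name (no such feature exists in the WebGPU spec today), and `navigator.gpu` is injectable here only so the branching logic can be exercised outside a browser:

```javascript
// Pick the hardware RT path when the (hypothetical) adapter feature is
// present, otherwise fall back to a compute-shader ray tracer.
async function selectRayTracingPath(gpu = navigator.gpu) {
  const adapter = gpu && (await gpu.requestAdapter());
  if (!adapter) return { mode: "unsupported" };

  const hasRT = adapter.features.has("ray-tracing"); // hypothetical feature name
  const device = await adapter.requestDevice({
    requiredFeatures: hasRT ? ["ray-tracing"] : [],
  });
  return { mode: hasRT ? "hardware-rt" : "compute-rt", device };
}
```

Requesting the feature only when `adapter.features` reports it mirrors how real optional WebGPU features (like "timestamp-query") are handled, so the same code runs unchanged on browsers that lack the extension.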

