JAX is designed from the start to fit well with systolic arrays (TPUs, Nvidia's tensor cores, etc.), which are extremely energy-efficient. WebGL won't be the tool that connects it to the web, but the generation after WebGPU will.
I believe we could get there eventually. For collisions, for example, there is work on making them differentiable (or on using a local surrogate at the contact point):
https://arxiv.org/abs/2207.00669
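As a toy illustration of the local-surrogate idea (my own sketch, not the linked paper's method): replace the hard contact force with a smooth softplus penalty, so gradients are defined through the contact event.

```python
import jax
import jax.numpy as jnp

def soft_contact_force(height, stiffness=100.0, sharpness=50.0):
    # Hard contact (force only when height < 0) has a non-differentiable kink.
    # A softplus surrogate smooths it, so gradients exist through the contact.
    penetration = jax.nn.softplus(-sharpness * height) / sharpness
    return stiffness * penetration

# The derivative is defined everywhere, including at the contact boundary:
dforce = jax.grad(soft_contact_force)
```

As `sharpness` grows, the surrogate approaches the hard-contact behaviour while staying differentiable.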
Robotics will need to connect vision with motors, with haptics, with 3D modelling, and to propagate gradients seamlessly through all of it: for calibrating torque against the elastic deformation of a material, for example. After all, matter is not discrete at small scales (staying above the atomic scale).
All this will require every module to be compatible with differentiability. It'll be expensive at first, but I'm sure some optimizations can get us close to the discrete case.
Also, even for meshes there is a lot to gain by trying to go the continuous way:
I had a lot of fun writing the article! And it is only half a joke
My intuition for so-called world models is that we'll have to plug together modules, each responsible for a domain (text, video, sound, robot haptics, physical modelling), in a way that lets the gradient propagate across them: a differentiable architecture. JAX seems well placed for this by making function manipulation a first-class citizen. Your testimony reinforces that view.
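A minimal sketch of what "plugging modules with gradient flow" looks like in JAX; the module names and shapes below are made up purely for illustration:

```python
import jax
import jax.numpy as jnp

# Two hypothetical "domain modules"; names and shapes are illustrative only.
def vision_module(params, pixels):
    return jnp.tanh(params["w_vis"] @ pixels)   # pixels -> latent

def haptics_module(params, latent):
    return params["w_hap"] @ latent             # latent -> torque

def end_to_end_loss(params, pixels, target_torque):
    latent = vision_module(params, pixels)
    torque = haptics_module(params, latent)
    return jnp.sum((torque - target_torque) ** 2)

params = {"w_vis": jnp.ones((4, 8)) * 0.1, "w_hap": jnp.ones((2, 4)) * 0.1}
# One backward pass reaches every module's parameters:
grads = jax.grad(end_to_end_loss)(params, jnp.ones(8), jnp.zeros(2))
```

The point is that composing ordinary functions is enough: `jax.grad` differentiates through the whole pipeline without the modules knowing about each other.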
It is AI generated, or was written by someone fairly far from the technical advances, IMHO. The Johnson-Lindenstrauss lemma is a very specific and powerful result, yet the article's JL explanation is vacuous. A knowledgeable human would not have left the reader wondering how any of it relates to the lemma.
Honestly, the bigger miss is people treating JL as some silver bullet for "extreme" compression, as if preserving pairwise distances for a fixed point set somehow means you still keep the task-relevant structure once you're dealing with modern models.
Try projecting embeddings this way and watch your recall crater the moment you need downstream task performance instead of nearest-neighbor retrieval demos. If you're optimizing for blog post vibes instead of anything measurable, sure, call it a breakthrough.
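For what it's worth, the narrow guarantee is easy to check numerically: a Gaussian random projection (the standard JL construction) keeps pairwise distances of a fixed point set within small distortion, and that is all it promises. The dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 4096, 512                  # points, original dim, projected dim
X = rng.normal(size=(n, d))
P = rng.normal(size=(d, k)) / np.sqrt(k)  # JL-style Gaussian random projection

def pairwise(A):
    # Pairwise Euclidean distances via the ||a||^2 + ||b||^2 - 2ab identity.
    sq = (A ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.sqrt(np.maximum(d2, 0.0))

orig, proj = pairwise(X), pairwise(X @ P)
mask = ~np.eye(n, dtype=bool)
distortion = proj[mask] / orig[mask]
# Ratios concentrate near 1: pairwise geometry survives. Nothing here says a
# downstream classifier keeps its accuracy, which is exactly the point above.
```

With k=512 the ratios land within a few percent of 1; shrink k and watch the spread widen, per the lemma's k = O(log n / eps^2) trade-off.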
> the whole process remains differentiable: we can even propagate gradients through the computation itself. That makes this fundamentally different from an external tool. It becomes a trainable computational substrate that can be integrated directly into a larger model.
IMHO this is the key point where the technique has an unfair advantage over a traditional interpreter.
How disruptive is differentiability here? To me it would mean that some tweaking can happen inside an LLM-program at train time: changing a constant, or swapping one function call for another. Can we gradient-descend effectively inside this huge space? And how different is it from tool-calling against a pool of learned programs (think GitHub, but for LLM programs written in classic languages)?
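The "changing a constant at train time" idea can be sketched directly: a toy program with an embedded constant, tuned by plain gradient descent through the whole computation. This is my own example, nothing to do with the linked work's actual setup.

```python
import jax
import jax.numpy as jnp

def program(constant, x):
    # A tiny "program" whose behaviour depends on an embedded constant; the
    # branch keeps it piecewise, but still differentiable almost everywhere.
    y = x * constant
    return jnp.where(y > 1.0, 1.0 + 0.5 * (y - 1.0), y)

def loss(constant, xs, targets):
    return jnp.mean((program(constant, xs) - targets) ** 2)

xs = jnp.linspace(0.0, 2.0, 32)
targets = program(3.0, xs)          # pretend the "right" constant is 3.0

grad_fn = jax.grad(loss)
c = 1.0
for _ in range(200):
    c = c - 0.1 * grad_fn(c, xs, targets)
# Plain gradient descent has nudged the embedded constant toward 3.0.
```

Whether this scales from one constant to the huge discrete space of whole programs is exactly the open question.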
If you only care about rotations in 3D, quaternions do everything you need :) with all the added benefits of having a division algebra to play with (after all, the cross product is a division-algebraic operation). PGA is absolutely great, but quite a bit more complex mathematically, and its spinors are not as obvious as quaternionic ones. In addition, GA is commonly taught in a very vector-brained way, but I find spinors much easier to deal with.
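For anyone who hasn't seen it spelled out, here is a minimal sketch of the quaternionic rotation formula q v q^-1 (for unit q, the inverse is just the conjugate), in plain numpy:

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    # Rotate vector v by unit quaternion q via q v q^-1
    # (for unit q, the inverse is the conjugate).
    qv = np.array([0.0, *v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

# A rotation by angle t is encoded as (cos(t/2), sin(t/2) * axis);
# 90 degrees about the z-axis:
q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
print(rotate(q, np.array([1.0, 0.0, 0.0])))  # ~ [0, 1, 0]
```

The half-angle in the encoding is the spinor double-cover showing up in code.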
I'm not well versed in RF physics. I had the feeling that light-wave coherence in lasers had to be created at a single source (or amplified as the beam passes through). This is the first time I've heard of phased-array lasers.
The beam is split and re-emitted at multiple points. By controlling the optical length of the path to each emitter (via the refractive index, or simply the physical length of the waveguide, using optical junctions), the phase at each emitter can be adjusted.
In practice, this can be done with phase-change materials (heating/cooling a material to change its index) or micro-ring resonators (to divert light from one waveguide to another).
The emitted beams then interfere, and the resulting interference pattern (constructive/destructive depending on direction) is used to steer the beam.
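To make the steering concrete, here is a small numeric sketch (my own toy model, not any particular device): n emitters with a linear per-emitter phase ramp, and the far-field intensity computed as the coherent sum of their contributions. Lengths are in units of the wavelength.

```python
import numpy as np

def array_factor(angles, n_emitters, spacing, wavelength, steer_angle):
    # Far-field interference of n emitters; the phase ramp is chosen so
    # contributions add constructively toward steer_angle.
    k = 2 * np.pi / wavelength
    phase_step = -k * spacing * np.sin(steer_angle)   # per-emitter phase offset
    i = np.arange(n_emitters)[:, None]
    total = np.exp(1j * i * (k * spacing * np.sin(angles) + phase_step)).sum(axis=0)
    return np.abs(total) ** 2 / n_emitters**2         # normalised intensity

angles = np.linspace(-np.pi / 2, np.pi / 2, 2001)
intensity = array_factor(angles, n_emitters=16, spacing=0.5, wavelength=1.0,
                         steer_angle=np.deg2rad(20))
peak = angles[np.argmax(intensity)]
# The main lobe sits at the commanded steering angle (~20 degrees).
```

Changing only `phase_step` moves the lobe, which is the whole appeal: beam steering with no moving parts.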
You are right that a single source is needed, though I imagine you could also use a laser source and shine it at another "pumped" material to have it emit more coherent light.
I've been thinking about possible use-cases for this technology besides LIDAR. Point-to-point laser communication could be an interesting application: satellite-to-satellite links, or drone-to-drone in high-EMI settings (a battlefield with jammers). It would also make mounting laser designators on small drones a lot easier. Here you go, free startup ideas ;)
In principle, as the sibling comment says, you could measure just the phase difference on the receiver end. The trick is that it's much harder at light frequencies than at radar frequencies. I'm not even sure we can measure the phase of a light beam directly, and if we could, the Nyquist rate is incredibly high: 2x the optical frequency takes us to PHz sampling rates.
There might be something cute you can do with interference patterns, but I have no idea about that. We do somewhat similar things in astronomical observations.
A phased array is an antenna composed of multiple smaller antennas in the same plane that can constructively/destructively steer its radio beam in any direction it is facing. I'm no radio engineer, but I think it works via the interference pattern being strongest in the direction you want the beam aimed. This is mostly used in radar arrays, though I suppose it could work with light too, since light is also a wave.
Not an expert, but the main challenges with laser coherence arise when shaping the output using multiple transmitters.
For lidar you transmit a pulse from a single source and receive its reflection at multiple points; mentioning a phased array alongside lidar almost always refers to the receiving side.
I think about it like a series of waves in a pool. One end has wave generators (the lasers) spaced appropriately such that resulting waves hitting the other end interfere just right and create a unified wavefront (same phase, amplitude, frequency).
A hard problem, especially wrt transactions on a moving target.
From memory, a handful of projects dedicated to just this dimension of databases: Noria, Materialize, Apache Flink, GCP's Continuous Queries, Apache Spark Streaming Tables, Delta Tables, ClickHouse streaming tables, TimescaleDB, ksqlDB, StreamSQL; and probably dozens more. IIRC, since this is about Postgres, there is a recently created extension trying to deal with this: pg_ivm
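For intuition, the core trick these systems share (incremental view maintenance) fits in a few lines. A toy sketch of keeping a GROUP BY count view up to date by applying deltas on write instead of re-running the query; this is illustrative only, real systems like pg_ivm handle joins, deletes, and transactional consistency.

```python
from collections import defaultdict

class CountView:
    # Toy materialized view: group -> COUNT(*), maintained incrementally.
    def __init__(self):
        self.counts = defaultdict(int)

    def insert(self, group):    # delta for an inserted row
        self.counts[group] += 1

    def delete(self, group):    # delta for a deleted row
        self.counts[group] -= 1
        if self.counts[group] == 0:
            del self.counts[group]

view = CountView()
for g in ["a", "b", "a", "c", "a"]:
    view.insert(g)
view.delete("b")
# view.counts is now {"a": 3, "c": 1}, with no full recomputation.
```

The hard part the comment alludes to is exactly what this toy skips: applying deltas consistently while concurrent transactions keep moving the base tables.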