My understanding, which is to be taken with a grain of salt, is that there's an additional constraint, not stated in the Scientific American article, that the plane curve be irreducible. The example of x^4 is reducible; it's x^2 * x^2, among other things. The actual conjecture is expressed in terms of genus, but the degree condition follows from the genus-degree formula.
The reason for the confusion is that a smooth, projective plane curve of degree d has genus (d-1)(d-2)/2, which is 2 or greater starting at d=4. Hence the phrasing in the article, which is missing the “smooth, projective” hypothesis. The equation y = x^4 doesn’t define a smooth curve when extended to the projective plane, because it has a singularity at infinity.
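Concretely, the singularity at infinity can be checked by homogenizing:

```latex
% Homogenize y = x^4 to degree 4:
F(x, y, z) = y z^{3} - x^{4}.
% Its only point at infinity (z = 0 forces x = 0) is [0 : 1 : 0], where
\partial_x F = -4 x^{3} = 0, \qquad
\partial_y F = z^{3} = 0, \qquad
\partial_z F = 3 y z^{2} = 0,
% so all three partials vanish and the projective curve is singular there.
```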
Thanks btw for saying clearly that BIO is not suitable for DVI output. I was curious about this and was planning to ask on social media.
I've done some fun stuff in PIO, in particular the NRZI bit stuffing for USB (12Mbps max). That's stretching it to its limit. Clearly there will be things for which BIO is much better.
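For anyone who hasn't seen it, the encoding itself is simple (the hard part is hitting the 12Mbps timing); here's a rough Python sketch of USB's stuff-then-NRZI transform — names and structure are mine, not from any SDK:

```python
def usb_encode(bits):
    """Sketch of USB full-speed line coding: bit stuffing then NRZI.

    USB rules: after six consecutive 1s a 0 is stuffed (forcing a
    transition); NRZI holds the line level on a 1 and toggles on a 0.
    """
    # Bit stuffing pass
    stuffed, run = [], 0
    for b in bits:
        stuffed.append(b)
        if b == 1:
            run += 1
            if run == 6:
                stuffed.append(0)  # stuffed bit: forced transition
                run = 0
        else:
            run = 0
    # NRZI pass: start from an assumed idle level of 1
    level, line = 1, []
    for b in stuffed:
        if b == 0:
            level ^= 1  # toggle on 0, hold on 1
        line.append(level)
    return line
```

Seven 1s in a row, for instance, come out as eight line symbols because of the stuffed 0.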
I suspect that a variant of BIO could probably do DVI by optimizing for that specific use case (in particular, configuring shifters on the output FIFO), but I'm not sure it's worth the lift.
USB 12Mbps is one of the envisioned core use cases - the Baochip doesn't have a host USB interface, so being able to emulate a full-speed USB host with a BIO core opens the possibility of things like having a keyboard that you can plug into the device. CAN is another big use case, once there is a CAN bus emulator there's a bunch of things you can do. Another one is 10/100Mbit ethernet - it's not fast - but good for extremely long runs (think repeaters for lighting protocols across building-scale deployments).
When considering the space of possibilities, I focused on applications where I could see actual product being sold that relies on the feature. The problem with DVI is that while it's a super-clever demo, I don't see volume products going to market relying on it. The moment you connect to an external monitor, you're going to want an external DRAM chip to run the sorts of applications that effectively utilize all those pixels. I could be wrong and have misjudged the utility of the demo, but if you do the analysis on the bandwidth and RAM available in the Baochip, I feel you could do a retro-gaming emulator, but you wouldn't, for example, be replacing a video kiosk with the chip. Running DOOM on a TV would be cool, but also, you're not going to sell a video game kit that just runs DOOM and nothing else.
The good news is there's plenty of room to improve the performance of the BIO. If adoption is robust for the core, I can make the argument to the company that's paying for the tape-outs to give me actual back-end resources, and I can upgrade the cores to something more capable while improving the DMA bandwidth, allowing us to chase higher system frequencies. But realistically, I don't see us ever reaching a point where, for example, we're bit-banging USB high speed at 480Mbps - if only because the I/Os aren't full-swing 3.3V at that point.
My feeling about programmable IOs is they’re fun, but not the right choice for commodity high speed interfaces like USB. You obviously can make them work, but they’re large compared to what you would need for a dedicated unit. The DVI over PIO is a good example: it showed something interesting (and that’s great!) but isn't widely useful. Also, a lot of protocols, even slow ones, have failure and edge cases that would need to be covered. Not to mention the physical characteristics, like you’ve said for high speed USB.
This is true, but only relevant if you order enough units (>100 k? Depending on price & margin of course) to customize your die. Otherwise, you have to find a chip with the I/Os that you want, all the rest being equal. Good luck with that if you need something specific (8 UARTs for instance) or obscure.
Yes, I can see BIO being really good at USB host. With 4k of SRAM I can see it doing a lot more of the protocol than just NRZI; easily CRC and the 1kHz SOF heartbeat, and I wouldn't be surprised if it could even do higher level things like enumeration.
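The CRC part especially: USB's token CRC5 (polynomial x^5 + x^2 + 1, all-ones init, bits fed LSB-first, output complemented) is only a handful of operations per bit. A reference sketch in Python (my own, for illustrating what the BIO program would have to do):

```python
def crc5_usb(data, nbits):
    """USB token CRC5: poly x^5 + x^2 + 1 (0x05), init 0b11111,
    input bits consumed LSB-first, result complemented on transmit."""
    poly, crc = 0x05, 0x1F
    for i in range(nbits):
        bit = (data >> i) & 1
        if bit != ((crc >> 4) & 1):          # compare input bit with CRC MSB
            crc = ((crc << 1) & 0x1F) ^ poly  # shift and apply polynomial
        else:
            crc = (crc << 1) & 0x1F           # shift only
    return crc ^ 0x1F  # complement before transmission
```

The 11-bit input would be the packed address + endpoint fields of a token packet.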
You may be right about not much scope for DVI in volume products. I should be clear I'm just playing with RP2350 because it's fun. But the limitation you describe really has more to do with the architectural decision to use a framebuffer. I'm interested in how much rendering you can get done racing the beam, and have come to the conclusion it's quite a lot. It certainly includes proportional fonts, tiles'n'sprites, and 4bpp image decompression (I've got a blog post in the queue). Retro emulators are a sweet spot for sure (mostly because their VRAM fits neatly in on-chip SRAM), but I can imagine doing a kiosk.
Definitely agree that bit-banging USB at 480Mbps makes no sense, a purpose-built PHY is the way to go.
The clearance for AC8646 to land on runway 4 is given in a sequence starting at 4:58. "Vehicle needs to cross the runway" at 6:43. Truck 1 and company ask for clearance to cross 4 at 6:53. Clearance is granted at 7:00. Then ATC asks both a Frontier and Truck 1 to stop; the voice is hurried and it's confusing.
Funny enough, the author of this blog post wrote another one on exactly that topic, entitled "What do executives do, anyway?"[1]. If you read it, you'll find it's written from quite an interesting perspective, not quite "fly on the wall," but perhaps as close as you're going to get in a realistic scenario.
Unfortunately, your complex script shaping for Arabic and Devanagari is wrong. The Arabic is missing the joining (all forms are isolated), and the Devanagari doesn't have the vowels combining (so you see those dotted circles).
To fix this you'll need Harfbuzz or something similar. Taking a quick look at the code, it seems like you're just doing a glyph at a time through the cmap. That, uh, won't do.
As the person who implemented GSUB support for Arabic in Prince (via the Allsorts Rust crate), this post highly intrigued me… especially because I wanted to see how they implemented GSUB for Opentype while being a film director and possibly stunt double on the side.
After seeing your comment, I’m saddened to see that OP and their comments in this thread are just bots.
You are completely right on all fronts. Thank you for taking a look at the code!
You hit the exact architectural bottleneck. Right now, the engine uses Intl.Segmenter to find the grapheme boundaries, but then it just does a direct cmap lookup to get the advance widths. It currently lacks a parser for the OpenType GSUB (Glyph Substitution) and GPOS (Glyph Positioning) tables, which is why Arabic defaults to isolated forms and Indic matras don't fuse.
The standard advice is exactly what you suggested: "just drop in HarfBuzz." But that creates an existential problem for this specific project. HarfBuzz is a massive C++ library. To run it in an Edge worker or pure V8 environment, I'd have to ship a WebAssembly binary that is often upwards of 1MB. That entirely defeats the purpose of building an 88 KiB, pure-JS, zero-dependency layout VM.
Doing complex text layout (CTL) and shaping purely in JavaScript without exploding the bundle size is essentially the final boss of this project. The roadmap is to either implement a highly tree-shakeable, pure-JS parser for the most critical GSUB/GPOS rules, or find a way to pre-compile shaping instructions.
For right now, it's a known trade-off: lightning-fast, edge-native pure JS layout, at the cost of failing on complex cursive ligatures. If you know of any micro-footprint pure-JS shaping libraries that don't rely on WASM, I am all ears!
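To give a sense of scale: the joining-type classification behind Arabic positional forms is tiny in isolation. A toy Python sketch (letter sets truncated; a real shaper must read joining classes from Unicode data and apply the font's own GSUB init/medi/fina lookups rather than hardcode anything):

```python
# Toy illustration of Arabic positional-form selection, the logic that
# GSUB's init/medi/fina features encode. NOT a real shaper: the letter
# sets below are truncated subsets for demonstration only.
DUAL = set("بتثجحخسشصضطظعغفقكلمنهي")  # dual-joining letters (subset)
RIGHT = set("اأإآدذرزو")              # right-joining-only letters (subset)

def positional_forms(word):
    forms = []
    for i, ch in enumerate(word):
        # A letter joins backward if the previous letter joins forward.
        joins_prev = i > 0 and word[i - 1] in DUAL and ch in (DUAL | RIGHT)
        # It joins forward only if it is dual-joining and a joinable
        # letter follows.
        joins_next = ch in DUAL and i + 1 < len(word) and word[i + 1] in (DUAL | RIGHT)
        if joins_prev and joins_next:
            forms.append("medi")
        elif joins_next:
            forms.append("init")
        elif joins_prev:
            forms.append("fina")
        else:
            forms.append("isol")
    return forms
```

The real cost isn't this logic; it's parsing the GSUB lookup lists and coverage tables out of the font, which is where the pure-JS parser work would go.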
And what's the point of being right when it's slow and bloated? Come on, it works for a lot of use cases, and it doesn't work for some. And it's still evolving.
All your comments here appear to be run through an AI engine. While you might think it makes you sound better if English isn’t your native tongue, it just comes off as insincere; I’d rather read your bad grammar than feel like I’m communicating with a clanker.
I don't agree that the clothoid is a math nightmare. One of the central problems you have to solve for roads is the offset curve. And a clothoid is extremely unusual in that its offset curve has a clean analytic solution. This won't be the case for the cubic parabola (which is really just a special case of the cubic Bézier).
Sure, you have to have some facility with math to use clothoids, but I think the only other curve that will actually be simpler is circular arcs.
I mean they are not a math nightmare per se if you’re comfortable with the theory. What I meant is that they become comparatively complex to integrate into a system like this.
Think about computing arc length, intersections, reparametrization, etc.; with clothoids that usually means some complex numerical algorithms.
Using circular arcs or even simple third-degree polynomials (like cubic parabolas) reduces many of those operations to trivial O(1) function calls, which makes them much cheaper to evaluate and manipulate procedurally, especially when you're computing them 60 times per frame.
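The cost difference is concrete. A minimal Python comparison (my own illustration): a circular arc has a closed-form point-at-arc-length, while the clothoid's Fresnel integrals have no elementary closed form and need numeric quadrature:

```python
import math

def arc_point(r, s):
    """Circular arc of radius r: closed-form point at arc length s. O(1)."""
    return (r * math.cos(s / r), r * math.sin(s / r))

def clothoid_point(a, s, n=200):
    """Clothoid x(s) = ∫cos(a t²/2) dt, y(s) = ∫sin(a t²/2) dt on [0, s].

    No elementary closed form, so evaluate the Fresnel integrals with a
    trapezoid rule; cost grows with the sample count n. O(n).
    """
    xs = ys = 0.0
    h = s / n
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end weights
        xs += w * math.cos(a * t * t / 2) * h
        ys += w * math.sin(a * t * t / 2) * h
    return (xs, ys)
```

For a = 0 the clothoid degenerates to a straight line, which gives a cheap sanity check on the quadrature.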
Not true at all. I interacted with Meena[1] while I was there, and the publication was almost three years before the release of ChatGPT. It was an unsettling experience, felt very science fiction.
The surprise was not that they existed: there were chatbots at Google way before ChatGPT. What surprised them was the demand, despite all the problems the chatbots had. The big problem with LLMs was not that they could do nothing, but how to turn them into products that made good money. Even people at OpenAI were surprised about what happened.
In many ways, turning tech into products that are useful, good, and don't make life hell is a more interesting issue of our times than the core research itself. We probably want to avoid the value-capturing platform problem, as otherwise we'll end up seeing governments using ham-fisted tools to punish winners in ways that aren't helpful either.
The uptake forced the bigger companies to act. With image diffusion models too - no corporate lawyer would let a big company release a product that allowed the customer to create any image...but when stable diffusion et al started to grow like they did...there was a specific price of not acting...and it was high enough to change boardroom decisions
Right. The problem was that people under-appreciated ‘alignment’ even before the models were big. And as they get bigger and smarter it becomes more of an issue.
Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, it was a few years before ChatGPT was publicly announced :)
On the official Kuycon site, it says "Since 2023, Kuycon has partnered exclusively with ClickClack.io to bring its innovative line of monitors to customers outside of China[...]". I'm seriously considering getting one of these.
I had one of these as a kid, actually on loan from another microcomputer enthusiast. My dad and I had soldered an SDK-85 kit (which I have) and we swapped that for the KIM-1 with another microcomputer enthusiast. It's the machine where I first started to learn programming, in machine code, entered in hex.
There's something really appealing about machines this simple which has been lost in the modern era. But this particular board was very limited, there wasn't a lot you could actually do with it.