Probably never had to work with (live) video at all? I think using moq is the dream for anyone who does. The alternatives (DASH, HLS, MSE, WebRTC, SRT, etc.) are all ridiculously fussy and limiting in one way or another, whereas QUIC/WebTransport and WebCodecs just give you the primitives you want to use as you choose, and moq appears focused on using them in a reasonable, CDN-friendly way.
Very cool result, but I'm struggling to understand the baseline: what does "TCP + application FEC" mean? If everything is one TCP stream, and thus the kernel delivers bytes to the application strictly in order, what does application FEC accomplish? Or is it distributed across several TCP streams?
I wouldn't say I'm done evaluating it, and as a spare-time project, my NVR's needs are pretty simple at present.
But WebCodecs is just really straightforward. It's hard to find anything to complain about.
If you have an IP camera sitting around, you can run a quick WebSocket+WebCodecs example I threw together: <https://github.com/scottlamb/retina> (try `cargo run --package client webcodecs ...`). For one of my cameras, it gives me <160 ms glass-to-glass latency,[1] with most of that being the IP camera's encoder. Because WebCodecs doesn't impose a particular jitter buffer implementation, you can simply not have one at all if you want to prioritize liveness, and that's what my example does. A welcome change from using MSE.
Skipping the jitter buffer also made me realize that one of my cameras had a weird pattern: up to six frames would pile up in the decode queue until a key frame arrived, then it would start over, which is hard to miss at 10 fps without a jitter buffer. It turns out that even though this camera's H.264 encoder never reorders frames, the vendor hadn't bothered to say so in the VUI bitstream restrictions, so the decoder had to introduce additional latency just in case. I added some logic to "fix" the VUI, and now its live stream is more responsive too. So the problem I had wasn't MSE's fault exactly, but MSE made it hard to understand because all the buffering was a black box.
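A rough model of the decision described above (made-up function, not real H.264 parsing): when the VUI doesn't declare a reorder bound, the decoder has to assume the worst-case reorder depth the stream's level allows, and buffer that many frames before emitting anything.

```rust
// Rough model (not real H.264 parsing) of the decoder's latency decision:
// without an explicit max_num_reorder_frames in the VUI, the decoder must
// assume the worst-case reorder depth the stream's level allows.
fn reorder_delay_frames(vui_max_num_reorder_frames: Option<u32>, level_worst_case: u32) -> u32 {
    vui_max_num_reorder_frames.unwrap_or(level_worst_case)
}

fn main() {
    // Camera that never reorders but doesn't say so: frames pile up anyway.
    assert_eq!(reorder_delay_frames(None, 6), 6);
    // After "fixing" the VUI to declare zero reorder frames: no added latency.
    assert_eq!(reorder_delay_frames(Some(0), 6), 0);
    println!("ok");
}
```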
Opened the link. Saw my own comment. I'm still as confused today as I was then about how this was ever supposed to work—either the quoted code is wrong or there's some weird unstated interface contract. I gather from other issues the maintainers are uninterested in a semver break any time soon. Unsure if they'd accept a performance regression (even if it makes the thing actually work). So I feel stuck. In the meantime, I don't use per-layer filtering. That's a trap.
I've got a whole list of puzzling bugs in the tracing <-> opentelemetry <-> datadog linkage.
Agree, and I would add that a bad abstraction, the wrong abstraction for the problem, and/or an abstraction misused is far worse than no abstraction. That was bugging me in another thread earlier today: <https://news.ycombinator.com/item?id=47350533>
I'm not sure Rust's `async fn` desugaring (which involves a data structure for the state machine) is inlineable. (To be precise: maybe the desugared function can be inlined, but LLVM isn't allowed to change the data structure, so there may be extra setup costs, duplicate `Waker`s, etc.) It's probably true that there is a performance cost. But I agree with the article's point that it's generally insignificant.
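A small sketch of why the desugaring has a cost model of its own (function names are made up for illustration): calling an `async fn` only constructs the compiler-generated state machine, and any local held across an `.await` is stored inside it, so the future's size grows with those locals.

```rust
// Calling an async fn only builds the compiler-generated state machine;
// nothing runs until it's polled.
async fn small() -> u32 {
    1
}

async fn big() -> u32 {
    let buf = [0u8; 1024]; // live across the await, so stored in the state machine
    std::future::ready(()).await;
    buf[0] as u32
}

fn main() {
    // The future's size reflects the locals held across await points: this
    // is state the optimizer isn't free to restructure the way it can a
    // plain function's stack frame.
    println!("small: {} bytes", std::mem::size_of_val(&small()));
    println!("big:   {} bytes", std::mem::size_of_val(&big()));
    assert!(std::mem::size_of_val(&big()) >= 1024);
}
```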
For non-async fns, the article already made this point:
> In release mode, with optimizations enabled, the compiler will often inline small extracted functions automatically. The two versions — inline and extracted — can produce identical assembly.
I'm fairly doubtful that it makes sense to use async function calls (or awaits) inside a hot loop in Rust. Pretty much anything you'd do with async in Rust is too expensive to belong in a genuinely hot loop where function call overhead would actually matter.
> The family has proof of residence (which is its own absurdity we won't discuss), and this third party can arbitrarily override that based on a black box argument.
Doesn't the family have a very straightforward libel claim against the third party? That the car was parked elsewhere may be true, but "Although you are the owner of record of a house within our district boundaries, your license plate recognition shows that is not the place where you reside" is a statement the family can disprove in court (to a civil standard) and demonstrate has financially damaged them ("her daughter is currently attending a private school 45 minutes away from her home"). If that statement came from the third party (rather than the school district misinterpreting the raw data themselves), the family will win. The straightforward financial damages (let alone pain-and-suffering or punitive damages) likely exceed the company's payment from the school district ("a total of $41,904 for a 36-month-long contract"). It wouldn't take many of these claims before the company becomes insolvent, and good riddance.
I'd also expect them to win a lawsuit against the school district for falsely denying the basic right of education. Perhaps the individual school administrator also for libel. With any luck, a total legal bloodbath that warns any other school districts away from this conduct.
That depends on whether the third party makes the claim of non-residence, how they make it, and whether they disclaim warranty and reliance. I can show you a site with graphs and data on who is parked where, when, and how often; I doubt they're directly saying, "This person definitely doesn't live at this residence, so deny her child entry."
That distinction is what I was getting at with "if that statement came from the third party (rather than the school district misinterpreting the raw data themselves)".
If the company just provided the raw data, they may be in better legal shape. But I'd say either they or the school administrator libeled the family. Maybe both. (Of course, I'm not a lawyer.) Even if the company did provide only the raw data, I wonder if libel is somehow implied in its contracted/intended use. And I'm really hoping for the legal bloodbath outcome, because this is unconscionable.
The family may not have time or money to pursue this, but there are lawyers who work on contingency or even pro bono, including the ACLU.
Props for identifying the issue immediately, but armed with that knowledge, why not redo the benchmark on a different instance type that has local storage? E.g. why not try a `c8id.2xlarge` or `c8id.4xlarge` (which bracket the `c6a.4xlarge`'s cost)?
IMO, there are a lot of smells in this code not addressed in the article. I only skimmed, and still, here are a few:
1. They represent a single room change with this sequence of three operations:
- `VectorDiff::Set { index: 3, value: new_room }` because of the new "preview",
- `VectorDiff::Remove { index: 3 }` to remove the room… immediately followed by
- `VectorDiff::PushFront { value: new_room }` to insert the room at the top of the Room List,
and I don't see any mention of atomic sequences. I think the room will momentarily disappear from view before being placed into the correct spot. That kind of thing would drive me nuts as a user. It suggests to me this is not the right abstraction.
Also, if you are actually representing the result with a vector, this is O(n), which is not great from a performance perspective if the vector can be large: you're shifting everything in [3, n) one spot forward and then one spot back, unnecessarily. If there were a `VectorDiff::Move`, you'd only shift 3 elements (the distance moved). That distance could still be the full length of the list, but usually won't be. Something like a `BTreeSet` would make it actually O(lg n).
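For illustration, here's what a hypothetical `Move` could do on a plain `Vec` (the helper name and types are mine, not the SDK's): a single rotation of the affected prefix instead of a full remove-and-reinsert.

```rust
// Hypothetical helper: what a VectorDiff::Move from `from` to the front
// could do on a plain Vec. Remove + PushFront shifts the whole tail twice;
// a rotation of the affected prefix touches only `from + 1` elements.
fn move_to_front<T>(rooms: &mut [T], from: usize) {
    rooms[..=from].rotate_right(1);
}

fn main() {
    let mut rooms = vec!["a", "b", "c", "d", "e"];
    move_to_front(&mut rooms, 3); // the room at index 3 got a new message
    assert_eq!(rooms, ["d", "a", "b", "c", "e"]);
    println!("{:?}", rooms);
}
```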
2. Taking a lock in a comparison function (they call it `Sorter`, but the name is wrong) is a smell for correctness as well as performance. Can the values change mid-sort? Then the result is non-deterministic. (In C++ it's actually undefined behavior to use a non-deterministic comparator. In Rust it's safe but still a bad idea.) You just can't sort values while they're changing, full stop, so interior mutability in a list you're sorting is suspect. [edit: and for what? Within a client, are you seriously doing heavy mutations on many rooms at once, or is a single lock on all the rooms sufficient?]
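One conventional way out, sketched with made-up types: take each lock exactly once to snapshot the sort key, then sort on the immutable snapshot, so the comparator can't observe mid-sort mutation.

```rust
use std::sync::Mutex;

// Made-up room type: the recency key sits behind a lock because another
// task can update it.
struct Room {
    name: String,
    recency: Mutex<u64>,
}

// Don't lock inside the comparator (non-deterministic if values change
// mid-sort); snapshot each key exactly once, then sort on the snapshot.
fn sort_by_recency(rooms: &mut Vec<Room>) {
    let mut keyed: Vec<(u64, Room)> = rooms
        .drain(..)
        .map(|r| {
            let key = *r.recency.lock().unwrap(); // one lock per element, up front
            (key, r)
        })
        .collect();
    keyed.sort_by(|a, b| b.0.cmp(&a.0)); // newest first; keys can't change mid-sort
    rooms.extend(keyed.into_iter().map(|(_, r)| r));
}

fn main() {
    let mut rooms = vec![
        Room { name: "alpha".to_string(), recency: Mutex::new(1) },
        Room { name: "beta".to_string(), recency: Mutex::new(3) },
        Room { name: "gamma".to_string(), recency: Mutex::new(2) },
    ];
    sort_by_recency(&mut rooms);
    let names: Vec<&str> = rooms.iter().map(|r| r.name.as_str()).collect();
    assert_eq!(names, ["beta", "gamma", "alpha"]);
}
```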
3. The sorted adapter just degrades to insertion sort of changes right here: <https://docs.rs/eyeball-im-util/0.10.0/src/eyeball_im_util/v...> and decomposes what could have been an atomic operation (append) into several inserts. Even `Set` does a linear scan and then becomes a (non-atomic again) remove and an insert, because it can change the sort order.
4. The `.sort_by(new_sorter_lexicographic(vec![Box(...), Box(...), Box(...)]))` call means it's doing up to three dynamic dispatches on each comparison. `new_sorter_lexicographic` is trivial, so inline those comparisons instead. And definitely don't take a separate lock on each (yuck), although see above anyway about how you just shouldn't have locks within the vec you're sorting.
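The inlined version can be a single static comparator chained with `Ordering::then_with`, so each tie-break is a plain closure the compiler can inline rather than a boxed trait object (the fields here are hypothetical).

```rust
use std::cmp::Ordering;

// Hypothetical room fields; the point is the comparator shape, not the types.
struct Room {
    favourite: bool,
    recency: u64,
    name: String,
}

// One static comparator instead of a Vec of boxed sorters: each tie-break
// is a plain closure the compiler can inline, with zero dynamic dispatch.
fn cmp_rooms(a: &Room, b: &Room) -> Ordering {
    b.favourite
        .cmp(&a.favourite) // favourites first
        .then_with(|| b.recency.cmp(&a.recency)) // then most recent
        .then_with(|| a.name.cmp(&b.name)) // then alphabetical
}

fn main() {
    let mut rooms = vec![
        Room { favourite: false, recency: 5, name: "ops".to_string() },
        Room { favourite: true, recency: 1, name: "general".to_string() },
        Room { favourite: false, recency: 5, name: "dev".to_string() },
    ];
    rooms.sort_by(cmp_rooms);
    let names: Vec<&str> = rooms.iter().map(|r| r.name.as_str()).collect();
    assert_eq!(names, ["general", "dev", "ops"]);
}
```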
5. In their "dessert" section, they talk about a problem with sort when the items are shallow clones. It's an example of a broader problem: they put something into an `ObservableVector` but then semantically mutate it via interior mutability (defeating the "observable"). You just can't do that. The sort infinite loop is the tip of the iceberg; everything relying on the observable aspect is then wrong. The lesson isn't just "jumping on an optimization can lead to a bug"; it's also that abstractions have contracts.
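A toy sketch of the contract being violated (none of these are the SDK's real types): an observable container can only notify subscribers about operations that go through it, so mutating an element through a shared handle silently desynchronizes every derived view.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Toy stand-in for an observable list: it can only emit diffs for the
// operations it actually sees.
struct ObservableVec<T> {
    items: Vec<T>,
    diffs_emitted: usize,
}

impl<T> ObservableVec<T> {
    fn new() -> Self {
        Self { items: Vec::new(), diffs_emitted: 0 }
    }
    fn push(&mut self, item: T) {
        self.items.push(item);
        self.diffs_emitted += 1; // subscribers would be notified here
    }
}

fn main() {
    let room = Rc::new(RefCell::new("Old Name".to_string()));
    let mut list = ObservableVec::new();
    list.push(Rc::clone(&room));

    // Interior mutation through the shared handle: the list's contents
    // changed, but no diff was emitted, so every derived view (including a
    // sorted adapter) is now silently stale.
    *room.borrow_mut() = "New Name".to_string();
    assert_eq!(list.diffs_emitted, 1);
    assert_eq!(*list.items[0].borrow(), "New Name");
}
```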
Probably for ints unconditionally. For floats in Sesse__'s example without `-ffast-math`, I count 10 muls, 2 muladds, 1 add. With `-ffast-math`, 1 mul, 3 muladds. <https://godbolt.org/z/dPrbfjzEx>