Hacker News | jsmith45's comments

Chrome desktop just landed native HLS support for the video element, enabled by default, within the last month. (There may be a few issues still to be worked out, and I don't know what the rollout status is, but it should just work by year's end.) Presumably most downstream Chromium derivatives will pick this support up soon.

My understanding is that Chrome for Android has supported it for some time by delegating to Android's native media support, which includes HLS.

Desktop and mobile Safari have had it enabled for a long time, and thus so has Chrome for iOS.

So this should eventually help things.


Chrome has finally landed native HLS playback support, enabled by default, within the past month. See http://crrev.com/c/7047405

I'm not sure what the rollout status actually is at the moment.


> See go/hls-direct-playback for design and launch stats analysis.

Is that an internal Google wiki or something? I can't find whatever they're referring to.


> COM is just 3 predefined calls in the virtual table.

COM can be as simple as that on the implementation side, at least if your platform's vtable ABI matches COM's perfectly, but it also allows far more complicated implementations, where every interface you query for may be a newly allocated, distinct object, etc.

I.e., even if you know for sure that the object is implemented in C++, your platform's vtable ABI matches COM's perfectly, and you know exactly which interfaces the object implements, you cannot legally use dynamic_cast, as there is no requirement that one class inherits from both interfaces. The conceptual "COM object" could instead be implemented as one class per interface, each likely containing a pointer to some shared data object.

This is also why you need to do the ref counting per interface pointer: while it is legal on the implementation side to share one ref count across everything, that is in no way required.


Note that VST3 doesn't implement the COM vtable layout, their COM-like FUnknown really is just 3 virtual methods and a bunch of GUIDs. They rely on the platform's C++ ABI not breaking.

You're right that QueryInterface can return a different object, but that doesn't make it significantly more complicated, assuming you're not managing the ref-counts manually.


Yes, ironically the COM in AAX and COM in VST3 have slightly different layouts.


BBFC's rulings have legal impact, and it can refuse classification, making the film illegal to show or sell in the UK.

Over in the US, getting an MPAA rating is completely voluntary. MPAA rules do not allow it to refuse to rate a motion picture, and even if they did, the consequences would be the same as choosing not to get a rating.

If you don't get a rating in the US, some theatres and retailers may decline to show/sell your film, but you can always do direct sales, and/or set up private showings.


Yeah, proving code correct is not a panacea. If you have C code that has been proven correct with respect to what the C standard mandates (and some specific values of implementation-defined limits), that is all well and good.

But where is the proof that your compiler will compile the code correctly with respect to the C standard and your target instruction set specification? How about the proof of correctness of your C library with respect to both of those, and the documented requirements of your kernel? Where is the proof that the kernel handles all programs that meet its documented requirements correctly?

Not to put too fine a point on it, but: where is the proof that your processor actually implements the ISA correctly (either as documented, or as intended, given that typos in ISA documentation are not THAT rare)? This is a very serious question! There have been a bunch of times that processors have failed to implement the ISA spec in very bad and noticeable ways. RDRAND has been found to be badly broken multiple times now. There was the Intel Skylake/Kaby Lake Hyper-Threading bug that needed microcode fixes. And these are just some of the issues that got publicized well enough that I noticed them; there are probably many others I never even heard about.


I'm confused by your perspective.

The simplest (and arguably best) usage for a devcontainer is simply to set up a working development environment (i.e. to have the correct versions of the compiler, linters, formatters, headers, static libraries, etc. installed). Yes, you can do this via non-integrated container builds, but then you usually need to have your editor connect to such a container so the language server can access all of that, and when doing this manually you also need to handle mapping your source code into the container.

Now, you probably want your main Dockerfile to set up most of the same stuff in its build stage, although normally you want the output stage to contain only the runtime stuff. For interpreted languages the output stage is usually similar to the "build" stage, but ought to omit linters and other pure development-time tooling.

Want to avoid the overlap between your devcontainer and your main Dockerfile's build stage? Good idea! Just specify a stage in your main Dockerfile where all the development-time tooling is installed, but which comes before you copy your code in. Then in your .devcontainer.json file, set the `build.dockerfile` property to point at your Dockerfile, and `build.target` to specify that stage. (If you need some customizations only for the dev container, your Dockerfile can have a tiny otherwise-unused stage that derives from the previous one, with just those changes.)

Under this approach, the devcontainer is suitable for basic development tasks (e.g. compiling, linting, running automated tests that don't need external services) and any other non-containerized testing you would otherwise do. For containerized testing, add the `ghcr.io/devcontainers/features/docker-outside-of-docker:1` feature, at which point you can just run `docker compose` from the editor terminal, exactly like you would if not using dev containers at all.
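A minimal sketch of that wiring, under assumed names (the `devtools` stage, the Node base images, and the file layout are all made up for illustration). First the shared Dockerfile:

```dockerfile
# Dockerfile -- hypothetical stage names
FROM node:22 AS devtools
# All development-time tooling, installed *before* any COPY of source code,
# so this stage is usable as the devcontainer.
RUN npm install -g typescript eslint

FROM devtools AS build
COPY . /src
WORKDIR /src
RUN tsc

FROM node:22-slim AS runtime
# Output stage: runtime stuff only.
COPY --from=build /src/dist /app
CMD ["node", "/app/main.js"]
```

Then `.devcontainer/devcontainer.json` pointing at the pre-COPY stage:

```json
{
  "build": {
    "dockerfile": "../Dockerfile",
    "context": "..",
    "target": "devtools"
  },
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
  }
}
```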


Might be worth checking out Tidal's Mondo notation, which, while not quite Haskell syntax, is far closer to it, being a proper functional-style notation that unifies with mini-notation, so there's no need to wrap many things in strings.

Looks like this:

    mondo`
    $ note (c2 # euclid <3 6 3> <8 16>) # *2 
    # s "sine" # add (note [0 <12 24>]*2)
    # dec(sine # range .2 2) 
    # room .5
    # lpf (sine/3 # range 120 400)
    # lpenv (rand # range .5 4)
    # lpq (perlin # range 5 12 # \* 2)
    # dist 1 # fm 4 # fmh 5.01 # fmdecay <.1 .2>
    # postgain .6 # delay .1 # clip 5

    $ s [bd bd bd bd] # bank tr909 # clip .5
    # ply <1 [1 [2 4]]>

    $ s oh*4 # press # bank tr909 # speed.8
    # dec (<.02 .05>*2 # add (saw/8 # range 0 1)) # color "red"
    `
If actual Tidal notation is important, that has been worked on, and would look like:

    await initTidal()
    tidal`
    d1 
    $ sub (note "12 0")
    $ sometimes (|+ note "12")
    $ jux rev $ voicing $ n "<0 5 4 2 3(3,8)/2>*8"
    # chord "<Dm Dm7 Dm9 Dm11>"
    # dec 0.5 # delay 0.5 # room 0.5 # vib "4:.25"
    # crush 8 # s "sawtooth" # lpf 800 # lpd 0.1
    # dist 1

    d2 
    $ s "RolandTR909_bd*4, hh(10,16), oh(-10,16)"
    # clip (range 0.1 0.9 $ fast 5 $ saw)
    # release 0.04 # room 0.5
    `
Even when that works, only the actually implemented functions and implemented custom operators are available, so not all Tidal code can necessarily be imported.

But it is currently broken on the REPL site because of https://codeberg.org/uzu/strudel/pulls/1510 and https://codeberg.org/uzu/strudel/issues/1335


Phonics-based reading is all about sounding out unknown words. The idea is that the student would understand the text if somebody else read it out loud, so if we can teach kids how to convert written words into sounds, they can understand many new words they first come across. The core idea is to teach that certain letters or groups of letters map to certain sounds (phonemes) at the start, and then gradually introduce more and more rules of English phonics, allowing students to successfully sound out even more complicated words.

The hope is that students will gradually learn to just recognize words by sight, which the overwhelming majority do eventually learn to do, and then only need to sound out unfamiliar words. The fact that some students have struggled to learn to recognize words and need to sound most of them out is part of why people try to create alternatives, but those largely don't work well.

Of course, English does have some tricky phonetics. We have some words with multiple different pronunciations. We have some words with the same phonemes but different meanings that differ solely based on syllable stress. There are even some words whose pronunciation simply must be memorized, as there is no coherent rule to get from the spelling to the pronunciation (see, for example, "colonel").


My view, which I suspect even Toub would agree with, is that if being allocation-free (or even just extremely low-allocation) is critical to you, then go ahead and use structs, stackalloc, etc. that guarantee no allocations.

That is far more guaranteed to work in all circumstances than these JIT optimizations, which could have edge cases where they won't function as expected. If stopwatch allocations were a major concern (as opposed to just feeling like a possible perf bottleneck), then a modern ValueStopwatch struct that consists of two longs (accumulatedDuration, and startTimestamp, which if non-zero means the watch is running) plus calls into the Stopwatch static methods is still simple and unambiguous.

But in cases where being low/no-allocation is less critical, and you are merely concerned about the impact of the allocations, these sorts of optimizations certainly do help. They even help when you don't really care about allocations, just raw perf, since the optimizations improve raw performance too.


I can get by with a weakly typed language for a small program I maintain myself, but if I am making something like a library, lack of type checking can be a huge problem.

In something like JavaScript, I might write a function or class or whatever with the full expectation that some parameter is a string. However, if I don't check the runtime type and throw when it is unexpected, then it is very easy for me to write the function in a way that currently happens to work with some other datatype. Publish this, and some random user will likely notice and start using that function with an unintended datatype.

Later on I make some change that relies on the parameter being a string (which is how I always imagined it), publish, and boom: I broke a user of the software, and my intended bugfix or minor point release was really a semver-breaking change; I should have incremented the major version.

I'd bet big money that many JavaScript libraries that are not fanatical about runtime-checking all parameters end up making accidental breaking changes like that. With something like TypeScript, this simply won't happen, as passing parameters incompatible with my declared types, though technically possible, is obviously unsupported and may break at any time.

