
I don’t know all the details for this particular issue, but, in general, knowing the latency can be important. Imagine you’re displaying frames and playing sound together and you want them synchronized. You need some way to submit graphics frames and audio so that they arrive at the same time.

The browser could always increase the apparent audio latency by buffering, but that reduces the ability of music apps to perform well.



What I'm suggesting is that the app's code doesn't work out how to submit graphics frames and audio so that they arrive at the same time; instead, the app's code tells the browser how to sync up the graphics frames and audio it receives.

Move responsibility for the syncing to the browser and then the app doesn't need to know anything. In short, I can put it together and send it to you, or I can send you the bits and tell you how they should be put together.


Can you explain how this would actually work?

A graphics frame appears on the screen at a specific time. (For VR, it is a definite time, and this is critical. For normal video or games, a little bit of slop, maybe a few ms, is probably okay.)

For audio, humans are sensitive to 10 ms deviations or even less.

Any API that works decently will need to synchronize audio and video, so there needs to be a way for a program to say "this audio sample should play at the same time as this video frame is shown". But an API should also allow programs to react as quickly as possible to user input. And Bluetooth headphones, in particular, have very, very high latency.

So designing an API that performs well without revealing the latency is hard.

I do think it would be good to cleanly separate normal web pages and games, though. For pure content, none of this matters except that video needs to maintain synchronization. But normal content does not need clicks to translate quickly to video changes.


I don't know about you, but I find everything about computers hard; that doesn't mean there aren't better or worse solutions.

The browser is the presentation layer, it needs to know the latency of your headphones (or the system does). Why does the content provider need it? What's wrong with "here is frame A, please play audio A at the same time (while taking into account the latency that only you know about)" as a request?


Because, if the audio latency significantly exceeds the video latency, then the browser can’t do this without delaying the video.


It's simply moving responsibility from one entity to another; there's no technical reason the content provider would be better at syncing the two, just as with any other network communication.



