Synchronizing Web Audio API and HTML5 video - javascript

I'm trying to synchronize an audio track being played via the Web Audio API with a video being played in an HTML5 video element. Using a fibre-optic synchronization device and Audacity, we can detect the drift between audio and video signals to a very high degree of accuracy.
I've tried detecting the drift between the two sources and correcting it, either by accelerating or decelerating the audio, or, as below, by simply setting the video to the same position as the audio.
Play(){
  // ...Play logic
  // Record when the track started playing, set when play is triggered.
  startTime = audioContext.currentTime;
}
Loop(){
  // Elapsed context time since playback began, i.e. the expected track position.
  let audioCurrentTime = audioContext.currentTime - startTime;
  // Re-seek the video if it has drifted more than 100ms from the audio.
  if(Math.abs(videoElement.nativeElement.currentTime - audioCurrentTime) > 0.1){
    videoElement.nativeElement.currentTime = audioCurrentTime;
  }
  requestAnimationFrame(Loop);
}
With all of this, we still notice a variable drift of around 40ms between the two sources. I've come to believe that audioContext.currentTime does not report back accurately, since when stepping through the code, multiple loop iterations report back the same time even though time has quite obviously passed. My guess is that the reported time is the amount of the track that has been passed to some internal buffer. Is there another way to get a more accurate playback position from an audio source being played?
Edit: I've updated the code to be a little closer to the actual source. I set the time at which playback was initiated and compare that to the current time to get the track position. This still does not report an accurate playback position.
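For what it's worth, one possible mitigation (a sketch, assuming a browser that supports AudioContext.getOutputTimestamp() or the outputLatency property) is to compensate for the gap between currentTime, which advances as samples are handed to the output buffer, and what is actually audible:

function getPlaybackPosition(audioContext, startTime) {
  // Prefer getOutputTimestamp(): its contextTime is the context time of
  // the sample currently being heard at the output device, rather than
  // the last sample handed to the internal buffer.
  if (typeof audioContext.getOutputTimestamp === "function") {
    const { contextTime } = audioContext.getOutputTimestamp();
    return contextTime - startTime;
  }
  // Otherwise, subtract the reported output latency, if available.
  const latency = audioContext.outputLatency || audioContext.baseLatency || 0;
  return audioContext.currentTime - startTime - latency;
}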

Related

Is there a way to reduce latency using getUserMedia?

While trying to reduce the video latency for a WebRTC communication, I measured the delay between the video capture and the video display.
To avoid measuring latency introduced by WebRTC, I just used getUserMedia and an HTML video element that displayed the stream.
I did it by displaying a timestamp every frame (using requestAnimationFrame), recording my screen with a USB camera, and taking screenshots where both the video display and the displayed timestamp were visible.
On average, I measured a delay of ~150ms.
This must be an overestimate (due to the time requestAnimationFrame waits between calls); however, the minimum measurement I made was around 120ms, which is still a lot.
Now, is there a way to reduce this delay between the video capture and the video display?
Note:
I tried using another video player (Windows' built-in player), and the measurements were really close (average delay about 145ms)
I tried another video device (my laptop webcam pointed at a mirror), and the results were less close but, in my opinion, still high (average delay about 120ms)
In general this is something you can only fix in the browser itself.
The requestVideoFrameCallback API gathers numbers such as captureTime and renderTime. https://web.dev/requestvideoframecallback-rvfc/ has a pretty good description, and https://webrtc.github.io/samples/src/content/peerconnection/per-frame-callback/ visualizes them.
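As a sketch of reading those numbers directly (assuming a browser that supports requestVideoFrameCallback and a <video> element playing the getUserMedia stream):

const video = document.querySelector("video");

function onFrame(now, metadata) {
  // captureTime and expectedDisplayTime are both on the performance.now()
  // timeline, so their difference approximates the capture-to-display
  // delay. captureTime is only present for local camera or WebRTC frames.
  if (metadata.captureTime !== undefined) {
    const delayMs = metadata.expectedDisplayTime - metadata.captureTime;
    console.log("capture-to-display delay: " + delayMs.toFixed(1) + "ms");
  }
  video.requestVideoFrameCallback(onFrame);
}

video.requestVideoFrameCallback(onFrame);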

HTML5 video: determine the intended framerate (fps) of a video object?

If I have an HTML5 video object, I can get its properties, such as duration. Is there also a way to get its intended framerate (i.e. frames per second)?
Note that I don't mean the actual playback rate that is attained by the browser when playing it, but the video's own native target framerate.
Or alternatively, is there a way to get the total framecount in the entire video? The number of frames divided by the duration would be what I need.
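For what it's worth, HTMLVideoElement exposes neither a framerate nor a frame count, but in browsers that support requestVideoFrameCallback, the callback's metadata carries a presentedFrames counter and a mediaTime, from which a rough estimate can be made during playback. A sketch:

const video = document.querySelector("video");
let lastMediaTime = null;
let lastFrames = null;

function onFrame(now, metadata) {
  if (lastMediaTime !== null && metadata.mediaTime > lastMediaTime) {
    // Frames presented divided by media time elapsed approximates the
    // video's native framerate (dropped frames will skew it slightly).
    const fps = (metadata.presentedFrames - lastFrames) /
                (metadata.mediaTime - lastMediaTime);
    console.log("estimated fps: " + fps.toFixed(2));
  }
  lastMediaTime = metadata.mediaTime;
  lastFrames = metadata.presentedFrames;
  video.requestVideoFrameCallback(onFrame);
}

video.requestVideoFrameCallback(onFrame);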

Persistent video stream with PHP / JS

I'm looking for a way to create a movie stream that works much like a TV channel. Basically the movies play constantly, whether people are viewing the page or not. Users might come to the page and the current movie is halfway through. Once one movie is done, the next one plays, and so on.
I can't figure out how to make this work, though. Right now I have the player working so that every time the page is loaded a random video is chosen and played, but this isn't what I want. Does anyone have any ideas on how I might get this to work?
You need to keep a schedule of your video stream in storage. That way you know which video will be playing, or is playing, at any given time.
Now, I know this is not exactly how a streaming service would work, but it will get the job done for you. When a user loads the page, your system computes which video should be playing at that moment, then computes the difference between the current time and the time when that video was supposed to start. You then seek the player to that offset, i.e. you pass that time difference to the player as the video's start position, so the video plays from a position that is not necessarily its beginning. A sketch of this follows below.
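A minimal sketch of that idea in client-side JavaScript (channelStart and schedule are hypothetical names; in practice the schedule and clock should live on the server, e.g. in PHP, so every visitor agrees on them):

// A fixed "on air" timestamp and a list of movies with durations in seconds.
const channelStart = Date.parse("2024-01-01T00:00:00Z");
const schedule = [
  { src: "movie1.mp4", duration: 5400 },
  { src: "movie2.mp4", duration: 7200 },
];

function currentProgram() {
  const total = schedule.reduce((sum, m) => sum + m.duration, 0);
  // Seconds since the channel went on air, wrapped so the schedule loops.
  let elapsed = ((Date.now() - channelStart) / 1000) % total;
  for (const movie of schedule) {
    if (elapsed < movie.duration) return { src: movie.src, offset: elapsed };
    elapsed -= movie.duration;
  }
}

const { src, offset } = currentProgram();
const video = document.querySelector("video");
video.addEventListener("loadedmetadata", () => {
  video.currentTime = offset; // seek to where the "broadcast" currently is
  video.play();
});
video.src = src;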
Hope this helps.

Video does not play through even if enough content has been appended

I have a setup where I send a 10-minute-long video (Elephants Dream) over the WebSocket protocol, chunked into short segments of 4s each.
I use the browser as the client, with the WebSocket API to receive the content and the HTML5 video tag as the player, appending the chunks to the video as they arrive using Media Source Extensions.
The thing is that there seems to be a limit somewhere (max receive buffer size, max MediaSource SourceBuffer size, max buffered content on the video element, etc.), so the video does not play correctly to the end but stops earlier, even though there is enough data.
All of the segments arrive correctly and get appended in time. Meanwhile, the video starts playing back from the beginning.
You can see the grey line on the player showing buffered video grow until, at some point, it stops growing, and the video stops playing when it reaches this position.
However, according to the output messages, the full video has been appended to the MediaSource element, which can also be verified by manually jumping to another position in the future or the past. It looks like only a fraction of the content is ever "loaded".
Since I'm testing on localhost, the throughput is very high, so I tried lowering it to more common values (still well above the video bitrate) to see if I was overloading the client, but this did not change anything.
I also tried different segment sizes, with exactly the same results, except that the point in time where playback stops is different.
Any idea where this limitation might be, or what may be happening?
I think you have a gap in the buffered data. Browsers have a limited buffer size you can append to. When that limit is reached and you append additional data, the browser will silently free some space by discarding frames it thinks it no longer needs from the buffer. In my experience, if you append too fast, you may end up with gaps in your buffer. You should monitor the buffered attribute when appending to see if there is any gap.
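A minimal sketch of that monitoring, assuming sourceBuffer is your existing SourceBuffer (more than one buffered range means there is a gap, and playback will stall when currentTime reaches the end of the range it is playing in):

sourceBuffer.addEventListener("updateend", () => {
  const ranges = sourceBuffer.buffered;
  for (let i = 0; i < ranges.length; i++) {
    console.log("range " + i + ": " + ranges.start(i).toFixed(2) +
                " - " + ranges.end(i).toFixed(2));
  }
  if (ranges.length > 1) {
    console.warn("gap detected in buffered data");
  }
});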
Are you changing representations right before it stops? When you change representations, you need to append the init segment for the new representation before you append its next media segment.
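A sketch of that ordering, with hypothetical helpers fetchInit and fetchSegment and an existing sourceBuffer (each append must finish before the next one starts):

function appendOnce(buf, data) {
  return new Promise((resolve) => {
    buf.addEventListener("updateend", resolve, { once: true });
    buf.appendBuffer(data);
  });
}

async function switchRepresentation(sourceBuffer, newRep, segmentIndex) {
  // Init segment for the new representation first, then its media segment.
  await appendOnce(sourceBuffer, await fetchInit(newRep));
  await appendOnce(sourceBuffer, await fetchSegment(newRep, segmentIndex));
}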

Problems with audio in cocos2d-javascript on mobile devices

I'm trying to find some documentation for audio support for cocos2d-javascript.
Preloading audio files and playing them with the standard method, using mp3 or ogg formats:
var audioEngine = cc.AudioEngine.getInstance();
audioEngine.playEffect(s_sound);
// s_sound is a reference to a preloaded audio resource
works perfectly in all browsers. But when you load the browser on a device, say an iPhone 4S with a Retina display, no audio plays, or at least it seems no audio plays for sounds longer than a few seconds. I haven't found anything stating specifically what limitations there may be in device support, nor do I see any attempt to resolve this in anyone's example games, like Moon Warriors, which also does not play audio on the iPhone 4S.
I am attempting to play multiple sounds simultaneously.
Each sound having problems is over 15 seconds long and is a file larger than 500k.
All of the audio is definitely loaded, since the game does not appear unless it is.
Each audio track is a layer of the background music for a Guitar Hero-type game. This is why they are over 500k and longer than 15 seconds.
Perhaps someone here has had similar issues and may know of a way to guarantee audio plays on mobile devices?
The acceptable answer thus far is to use Howler.js and HowlerAudioEngine.js.
I load both files in my script loader, then modify Platform/HTML5/CocosDenshion/SimpleAudioEngine.js:
cc.AudioEngine.getInstance = function () {
    // Use Howler's engine on high-density (Retina) displays, which is the
    // stand-in for "mobile" here; use the stock engine everywhere else.
    if (window.devicePixelRatio > 1) {
        if (!this._instance) {
            this._instance = new cc.HowlerAudioEngine();
            this._instance.init();
        }
    } else {
        if (!this._instance) {
            this._instance = new cc.AudioEngine();
            this._instance.init();
        }
    }
    return this._instance;
};
Notice window.devicePixelRatio > 1, where I detect a Retina display (or HD in general). This can be substituted with any proven way of detecting "mobile". However, sys.platform and cc.config.deviceType always returned "browser" for me, so I resorted to the pixel ratio for now, since my testing device returns true for that.
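As a hedged alternative sketch, a user-agent check would also work here (UA sniffing is brittle, but it avoids classifying high-density desktop displays as mobile):

// Hypothetical substitute for the devicePixelRatio check above.
var isMobile = /iPhone|iPad|iPod|Android/i.test(navigator.userAgent);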
The cons, however, are that there is still a substantial delay in the delivery of sound effects. I can compensate for that for the most part, so this is better than nothing. Also, I haven't found the tolerance threshold for audio files (play time or data length); I do know that my longer tracks of about 3 minutes don't play even in Howler, but my 1-minute tracks do.
If you have anything that is better, more reliable or just an addition to this, please post it. There isn't enough support for cocos2d-javascript yet so everything helps.
