How to play a live stream without interruption - javascript

I receive a sound wave at regular time intervals (for example, once every second) that I want to play live in the browser without interruption.
For example, suppose a function generates a one-second sine wave every second, and I convert the generated wave into an audio format and play it.
I need this sound to play seamlessly and without interruption, but when I append it to the MediaSource buffer there is a small gap (a few milliseconds) between the two audio chunks.
How can I play a sound wave that arrives at fixed time intervals in the browser without interruption?
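
One common way around the gap is to skip MediaSource entirely and schedule the raw PCM chunks on an AudioContext timeline, starting each AudioBufferSourceNode exactly where the previous one ends. A minimal sketch, assuming each incoming chunk arrives as a Float32Array of mono samples at the context's sample rate (the chunk format is an assumption, not part of the question):

const ctx = new AudioContext();
let nextStartTime = 0; // context time at which the next chunk should begin

// Schedule one chunk of mono PCM so it starts exactly when the previous one ends.
function playChunk(samples /* Float32Array */) {
  const buffer = ctx.createBuffer(1, samples.length, ctx.sampleRate);
  buffer.copyToChannel(samples, 0);

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);

  // Keep a small safety margin the first time so scheduling is never late.
  if (nextStartTime < ctx.currentTime) {
    nextStartTime = ctx.currentTime + 0.05;
  }
  source.start(nextStartTime);
  nextStartTime += buffer.duration; // back-to-back, sample-accurate
}

Because start() takes an absolute time on the context's own clock, consecutive buffers are stitched together sample-accurately; the usual few-millisecond gap comes from starting each chunk "as soon as possible" instead of at a scheduled time.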

Related

How to play the first ten seconds and then the last ten seconds of a video with no delay?

I'm trying to figure out how to play a video with HTML5/JS, but to play one part of the video and then another part without any sort of delay while it "seeks" to the second part.
I've tried using libraries like video.js to set currentTime, but there is a delay of a second or so.
I'm looking to do this with no delay at all - seamlessly, as if the two parts of the video were right next to each other.
I'm assuming this should be possible, even if it means streaming chunks of the video - for example, streaming the chunks for the first ten seconds and then the chunks for the last ten seconds.
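
One way to hide the seek is to use two video elements pointed at the same source: while the first plays its segment, the second sits hidden, already paused at the start of the next segment, and you swap them at the cut point. A minimal sketch under those assumptions (the element ids and the ten-second offsets are illustrative):

const a = document.getElementById('clipA'); // plays the first ten seconds
const b = document.getElementById('clipB'); // same file, pre-seeked to the last ten seconds

// Pre-seek the hidden element once its duration is known, so the decoder
// is already positioned long before the swap happens.
b.addEventListener('loadedmetadata', () => {
  b.currentTime = b.duration - 10;
});
b.style.display = 'none';

a.play();
a.addEventListener('timeupdate', () => {
  if (a.currentTime >= 10) {
    a.pause();
    a.style.display = 'none';
    b.style.display = '';
    b.play(); // starts immediately from the already-buffered position
  }
});

Note that timeupdate only fires a few times per second, so for a frame-tight cut you could poll currentTime from a requestAnimationFrame loop instead.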

Sequence musical event with Web Audio API

I would like to sequence musical events with Web Audio to make a simple musical piece. I'm just getting started with Web Audio and I'm looking for tips or pointers in the right direction to be able to sequence simple events one after the other. I know how to make a playable synth with Web Audio, but what I want is to make a small composition.
Example of what I mean by sequencing events (or making a composition):
1- At time 0: one oscillator plays a part that ends after 3 minutes.
2- After 1 minute, another oscillator starts a part that stops after 30 seconds.
3- After 2 minutes, an audio file starts (say, a prerecorded bass).
How do we sequence (start and stop) different parts at different times?
I know how to make a synth with Web Audio, but I would like to time events on a timeline.
Where should I start?
I guess my question is: what is the best way to trigger a function a certain amount of time after another one, or when one is finished, with Web Audio?
thx!
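
With Web Audio you usually don't trigger functions after a delay at all; you schedule every start and stop up front on the AudioContext clock, because osc.start(t) and osc.stop(t) take absolute context times. A minimal sketch of the three-part timeline above, assuming the prerecorded bass has already been decoded into an AudioBuffer named bassBuffer:

const ctx = new AudioContext();
const t0 = ctx.currentTime; // origin of the composition's timeline

// Part 1: an oscillator from 0:00 to 3:00.
const osc1 = ctx.createOscillator();
osc1.connect(ctx.destination);
osc1.start(t0);
osc1.stop(t0 + 180);

// Part 2: a second oscillator from 1:00 to 1:30.
const osc2 = ctx.createOscillator();
osc2.frequency.value = 220;
osc2.connect(ctx.destination);
osc2.start(t0 + 60);
osc2.stop(t0 + 90);

// Part 3: the prerecorded bass at 2:00 (bassBuffer assumed decoded
// earlier via ctx.decodeAudioData).
const bass = ctx.createBufferSource();
bass.buffer = bassBuffer;
bass.connect(ctx.destination);
bass.start(t0 + 120);

If you do need a JavaScript callback when a part finishes (to chain non-audio logic), every source node fires an ended event you can listen for instead of using setTimeout.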

Is there a way to reduce latency using getUserMedia?

While trying to reduce the video latency for a WebRTC communication, I measured the delay between video capture and video display.
To avoid measuring latency introduced by WebRTC itself, I just used getUserMedia and an HTML video element that displayed the stream.
I did it by displaying a timestamp every frame (using requestAnimationFrame), recording my screen with a USB camera, and taking screenshots where both the video display and the displayed timestamp were visible.
On average, I measured a delay of ~150ms.
This must be an overestimate (due to the time between requestAnimationFrame calls); however, the minimum I measured is around 120ms, which is still a lot.
Now, is there a way to reduce this delay between video capture and video display?
Note:
I tried another video player (Windows' built-in player), and the measurements were very close to the original ones (average delay of about 145ms).
I tried another video device (my laptop webcam pointed at a mirror), and the results were further from the original but still high, in my opinion (average delay of about 120ms).
In general this is something you can only fix in the browser itself.
The requestVideoFrameCallback API gathers per-frame numbers such as captureTime and renderTime. https://web.dev/requestvideoframecallback-rvfc/ has a pretty good description, and https://webrtc.github.io/samples/src/content/peerconnection/per-frame-callback/ visualizes them.
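
To see where the time goes on a particular setup, you can read those per-frame timestamps directly. A minimal sketch, assuming a video element that is already playing a getUserMedia stream (browser support varies, and captureTime is only populated for local capture and WebRTC sources):

const video = document.querySelector('video');

function onFrame(now, metadata) {
  if (metadata.captureTime !== undefined) {
    // Time from the camera capturing the frame to its expected display.
    const latencyMs = metadata.expectedDisplayTime - metadata.captureTime;
    console.log(`capture-to-display: ${latencyMs.toFixed(1)} ms`);
  }
  video.requestVideoFrameCallback(onFrame); // re-register for the next frame
}

video.requestVideoFrameCallback(onFrame);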

Web Audio API: adjust the play length of an audio sample (e.g. "C5.mp3")?

Can I take a (soundfont) sample, e.g. "C5.mp3", and extend or shorten its duration to a given length (without distorting the pitch)?
(It would be great if this were as easy as using an oscillator and changing the timings of NoteOn and NoteOff, but with a more natural sound than sine waves. Can that be done easily without having to resort to MIDI.js or similar?)
You would either need to loop the mp3 or record a longer sample. Looping can be tricky to make seamless without clicks or pops, though, and depending on the sample you would hear the attack of the note each time it looped.
.mp3s and other formats of recorded audio are all finite, predetermined sets of binary data. The reason it's so easy to manipulate sine waves with an oscillator is that the Web Audio API generates the wave dynamically based on the input you give it.
Good soundfonts define loop points for sustained sounds. See this WebAudioFont example:
https://surikov.github.io/webaudiofont/examples/flute.html
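
In plain Web Audio, sustaining a sample without changing its pitch comes down to looping a steady region of the decoded buffer and stopping at the desired note length. A minimal sketch, assuming the loop points (0.2s-0.4s here) fall in a steady part of the tone; they have to be chosen per sample:

const ctx = new AudioContext();

// Play a decoded sample for an arbitrary duration by looping its sustain region.
function playNote(buffer, durationSeconds) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;
  source.loopStart = 0.2; // assumed: past the attack of the note
  source.loopEnd = 0.4;   // assumed: before the decay begins
  source.connect(ctx.destination);
  source.start();
  source.stop(ctx.currentTime + durationSeconds);
}

// Usage: fetch and decode the mp3, then hold the note for 2.5 seconds.
fetch('C5.mp3')
  .then((response) => response.arrayBuffer())
  .then((data) => ctx.decodeAudioData(data))
  .then((buffer) => playNote(buffer, 2.5));

Shortening is the easy half: just stop() earlier, ideally after a short gain ramp down to avoid a click.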

Synchronizing Web Audio API and HTML5 video

I'm trying to synchronize an audio track being played via the Web Audio API with a video being played in an HTML5 video element. Using a fibre-optic synchronization device and Audacity, we can detect the drift between the audio and video signals to a very high degree of accuracy.
I've tried detecting the drift between the two sources and correcting it by either accelerating or decelerating the audio, and, as below, by simply setting the video to the same position as the audio.
Play() {
  // ...play logic
  // Used to calculate when the track started playing; set when play is triggered.
  startTime = audioContext.currentTime;
}

Loop() {
  // Elapsed context time since playback started = expected track position.
  let audioCurrentTime = audioContext.currentTime - startTime;
  if (videoElement.nativeElement.currentTime - audioCurrentTime > 0.1) {
    videoElement.nativeElement.currentTime = audioCurrentTime;
  }
  requestAnimationFrame(Loop);
}
With all of this, we still notice a variable drift between the two sources of around 40ms. I've come to believe that audioContext.currentTime does not report back accurately, since when stepping through the code multiple loops will report the same time even though time has obviously passed. My guess is that the time being reported is the amount of the track that has been handed off to some internal buffer. Is there another way to get a more accurate playback position from an audio source being played?
Edit: I've updated the code to be a little closer to the actual source. I set the time at which playback was initialized and compare that to the current time to get the track position. This still reports a time that is not an accurate playback position.
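
One thing worth trying before a smarter correction scheme: newer browsers expose the gap between the context clock and what has actually reached the speakers. A minimal sketch using AudioContext.getOutputTimestamp() and outputLatency to estimate the position actually being heard (availability varies by browser, so both are feature-tested):

// Estimate the track position the listener is actually hearing,
// rather than the position that has merely been scheduled.
function getPlaybackPosition(audioContext, startTime) {
  let contextTime = audioContext.currentTime;

  // getOutputTimestamp ties the context clock to performance.now(),
  // giving a finer-grained reading than currentTime's block-sized steps.
  if (audioContext.getOutputTimestamp) {
    const ts = audioContext.getOutputTimestamp();
    if (ts.contextTime !== undefined) contextTime = ts.contextTime;
  }

  // outputLatency (or baseLatency as a fallback) is the delay between
  // scheduling audio and it reaching the output hardware.
  const latency = audioContext.outputLatency || audioContext.baseLatency || 0;
  return contextTime - startTime - latency;
}

Comparing the video against this latency-compensated position may remove a fixed portion of the ~40ms drift; any remainder would be the two clocks genuinely running at slightly different rates.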
