Playing Audio Base64s With No Delay - javascript

I have an object with a couple of base64 strings (audio) inside. The base64s start playing on a keydown event. In some situations (when the base64 is a little large), there is a delay before playback starts. Is there any way to remove this delay, or at least reduce it?
The app is written in JavaScript and runs on Electron.
//audio base64s object
var audio = {A: new Audio('base64[1]'), B: new Audio('base64[2]'), C: new Audio('base64[3]')};

//audio will start playing on keydown
function keydown(ev) {
    var key = String.fromCharCode(ev.keyCode);
    if (audio[key].classList.contains('holding') == false) {
        audio[key].classList.add('holding');
        if (audio[key].paused) {
            var playPromise = audio[key].play();
            if (playPromise) {
                playPromise.then(function() {
                    setTimeout(function() {
                        // Follow-up operation
                    }, audio[key].duration * 1000);
                }).catch(function() {
                    // Audio loading failure
                });
            }
        } else {
            audio[key].currentTime = 0;
        }
    }
}

I wrote up a complete example for you, annotated below.
Some key takeaways:
If you need low latency or any control over timing, you need to use the Web Audio API. Without it, you have no control over the buffering or other behavior of audio playback.
Don't use base64 for this. You don't need it. Base64 encoding is a method for encoding binary data into a text format. There is no text format here... therefore it isn't necessary. When you use base64 encoding, you add 33% overhead to the storage, you use CPU, memory, etc. There is no reason for it here.
Do use the appropriate file APIs to get what you need. To decode an audio sample, we need an array buffer. Therefore, we can use the .arrayBuffer() method on the file itself to get that. This retains the content in binary the entire time and allows the browser to memory-map if it wants to.
The code:
const audioContext = new AudioContext();
let buffer;

document.addEventListener('DOMContentLoaded', (e) => {
  document.querySelector('input[type="file"]').addEventListener('change', async (e) => {
    // Start the AudioContext, now that we have user interaction
    audioContext.resume();

    // Ensure we actually have at least one file before continuing
    if ( !(e.currentTarget.files && e.currentTarget.files[0]) ) {
      return;
    }

    // Read the file and decode the audio
    buffer = await audioContext.decodeAudioData(
      await e.currentTarget.files[0].arrayBuffer()
    );
  });
});
document.addEventListener('keydown', (e) => {
  // Ensure we've loaded audio
  if (!buffer) {
    return;
  }

  // Create the node that will play our previously decoded buffer
  const bufferSourceNode = audioContext.createBufferSource();
  bufferSourceNode.buffer = buffer;

  // Hook up the buffer source to our output node (speakers, headphones, etc.)
  bufferSourceNode.connect(audioContext.destination);

  // Adjust pitch based on the key we pressed, just for fun
  bufferSourceNode.detune.value = (e.keyCode - 65) * 100;

  // Start playing... right now
  bufferSourceNode.start();
});
JSFiddle: https://jsfiddle.net/bradisbell/sc9jpxvn/1/
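If the audio really has to stay base64-encoded (for example because it ships embedded in the Electron app, as in the question), the same idea still applies: decode everything once at startup so the keydown handler only has to schedule an already-decoded buffer. A minimal sketch, assuming a hypothetical base64Strings map of key name to raw base64 (no data: prefix):

const audioContext = new AudioContext();
const buffers = {};

// Hypothetical map of key name -> base64-encoded audio bytes
const base64Strings = { A: '...', B: '...', C: '...' };

async function preloadAll() {
  for (const [key, b64] of Object.entries(base64Strings)) {
    // Decode base64 text to raw bytes, then decode the bytes to an AudioBuffer
    const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
    buffers[key] = await audioContext.decodeAudioData(bytes.buffer);
  }
}
preloadAll();

document.addEventListener('keydown', (e) => {
  const buffer = buffers[e.key.toUpperCase()];
  if (!buffer) return;
  audioContext.resume(); // keydown counts as the required user gesture
  const node = audioContext.createBufferSource();
  node.buffer = buffer;
  node.connect(audioContext.destination);
  node.start(); // no decode work left to do, so no audible delay
});

This keeps the base64 cost at startup only; it does not remove the 33% size overhead mentioned above.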

Related

Fastest way to capture an image from an HTML Canvas

Here's my code to capture an image from a Canvas playing video:
let drawImage = function(time) {
    prevCtx.drawImage(videoPlayer, 0, 0, w, h);
    requestAnimationFrame(drawImage);
}
requestAnimationFrame(drawImage);

let currIndex = 0;
setInterval(function () {
    if (currIndex === 30) {
        currIndex = 0;
        console.log("Finishing video...");
        videoWorker.postMessage({action : "finish"});
    } else {
        console.log("Adding frame...");
        // w/o this `toDataURL` the loop runs at 30 cycles / second,
        // so this is the hot spot that needs optimization:
        const base64img = preview.toDataURL(mimeType, 0.9);
        videoWorker.postMessage({ action: "addFrame", data: base64img});
        currIndex++;
    }
}, 1000 / 30)
The goal is that every 30 frames (which should be one second) it triggers transcoding of the frames added so far.
The problem here is that preview.toDataURL(mimeType, 0.9) adds at least a second per cycle; without it, the log shows currIndex === 30 being hit every second. What would be the best approach to capture roughly 30 FPS? What is the fastest way to capture an image from an HTML canvas so that it is not the bottleneck of a real-time video transcoding process?
You should probably revise your project, because saving the whole video as still images will blow out the memory of most devices in no time. Instead, have a look at the MediaStream and MediaRecorder APIs, which can do the transcoding and compression in real time. You can request a MediaStream from a canvas through its captureStream() method.
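A minimal sketch of that approach (not part of the original answer), assuming preview is the canvas being drawn to and that the browser can record video/webm:

// Capture the canvas as a MediaStream at ~30 FPS
const stream = preview.captureStream(30);

// Let the browser encode and compress in real time
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const recordedChunks = [];

recorder.ondataavailable = (e) => {
  if (e.data.size > 0) recordedChunks.push(e.data);
};
recorder.onstop = () => {
  // Already-encoded video; no per-frame toDataURL needed
  const videoBlob = new Blob(recordedChunks, { type: 'video/webm' });
  console.log('Recorded', videoBlob.size, 'bytes');
};

recorder.start(1000); // emit an encoded chunk roughly every second
// ... later: recorder.stop();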
If you still need to grab individual frames, the fastest is probably to send an ImageBitmap to your Worker thread: these are really fast to generate from a canvas (a simple copy of the pixel buffer), can be transferred to your worker script, and can be drawn there on an OffscreenCanvas.
Main drawback: it's currently only supported in the latest Chrome and Firefox (through WebGL), and this can't be polyfilled...
main.js
else {
    console.log("Adding frame...");
    // Note: the interval callback must be async to use await here
    const bitmap = await createImageBitmap(preview);
    // ImageBitmap is Transferable, so the pixel data is moved, not copied
    videoWorker.postMessage({ action: "addFrame", data: bitmap }, [bitmap]);
    currIndex++;
}
worker.js
const canvas = new OffscreenCanvas(width, height);
const ctx = canvas.getContext('2d'); // Chrome only
onmessage = async (evt) => {
    // ...
    ctx.drawImage(evt.data.data, 0, 0);
    const image = await canvas.convertToBlob();
    storeImage(image);
};
Another option is to transfer ImageData. It is not as fast as an ImageBitmap, but it still has the advantage of not blocking your main thread with the compression part, and since it can be transferred, the message to the Worker isn't computationally heavy either.
If you go this route, you may want to compress the data in your Worker thread using something like pako (which uses the compression algorithm used by PNG images).
main.js
else {
    console.log("Adding frame...");
    const img_data = prevCtx.getImageData(0, 0, width, height);
    // Transfer the underlying ArrayBuffer (the typed array itself is not Transferable)
    videoWorker.postMessage({ action: "addFrame", data: img_data }, [img_data.data.buffer]);
    currIndex++;
}
worker.js
// pako must be available in the worker scope (e.g. loaded with importScripts)
onmessage = (evt) => {
    // ...
    const image = pako.deflate(evt.data.data); // compress before storing
    storeImage(image);
};

Change playout delay in WebRTC stream

I'm trying to cast a live MediaStream (eventually from the camera) from peerA to peerB, and I want peerB to receive the live stream in real time and then replay it with an added delay. Unfortunately it isn't possible to simply pause the stream and resume with play, since that jumps forward to the live moment.
So I have figured out that I can use MediaRecorder + SourceBuffer to rewatch the live stream: record the stream, append the buffers to MSE (SourceBuffer), and play it 5 seconds later.
This works great on the local device (stream). But when I try to use MediaRecorder on the receiver's MediaStream (from pc.onaddstream), it looks like it gets some data and is able to append the buffer to the SourceBuffer. However, it does not replay; sometimes I get just one frame.
const [pc1, pc2] = localPeerConnectionLoop()
const canvasStream = canvas.captureStream(200)

videoA.srcObject = canvasStream
videoA.play()

// Note: using two MediaRecorders at the same time seems problematic
// But this one works
// stream2mediaSorce(canvasStream, videoB)
// setTimeout(videoB.play.bind(videoB), 5000)

pc1.addTransceiver(canvasStream.getTracks()[0], {
  streams: [ canvasStream ]
})

pc2.onaddstream = (evt) => {
  videoC.srcObject = evt.stream
  videoC.play()

  // Note: using two MediaRecorders at the same time seems problematic
  // THIS DOES NOT WORK
  stream2mediaSorce(evt.stream, videoD)
  setTimeout(() => videoD.play(), 2000)
}

/**
 * Turn a MediaStream into a SourceBuffer
 *
 * @param  {MediaStream}      stream   Live stream to record
 * @param  {HTMLVideoElement} videoElm Video element to play the recorded video in
 * @return {undefined}
 */
function stream2mediaSorce (stream, videoElm) {
  const RECORDER_MIME_TYPE = 'video/webm;codecs=vp9'
  const recorder = new MediaRecorder(stream, { mimeType : RECORDER_MIME_TYPE })

  const mediaSource = new MediaSource()
  videoElm.src = URL.createObjectURL(mediaSource)
  mediaSource.onsourceopen = (e) => {
    const sourceBuffer = mediaSource.addSourceBuffer(RECORDER_MIME_TYPE)

    const fr = new FileReader()
    fr.onerror = console.log
    fr.onload = ({ target }) => {
      console.log(target.result)
      sourceBuffer.appendBuffer(target.result)
    }
    recorder.ondataavailable = ({ data }) => {
      console.log(data)
      fr.readAsArrayBuffer(data)
    }
    setInterval(recorder.requestData.bind(recorder), 1000)
  }

  console.log('Recorder created')
  recorder.start()
}
Do you know why it won't play the video?
I have created a fiddle with all the necessary code to try it out; the JavaScript tab is the same code as above (the HTML is mostly irrelevant and does not need to be changed).
Some people try to reduce the latency, but I actually want to increase it to ~10 seconds, to rewatch something you did wrong in a golf swing or similar, and if possible to avoid MediaRecorder altogether.
EDIT:
I found something called "playout-delay" in some RTC extension
that allows the sender to control the minimum and maximum latency from capture to render time
https://webrtc.org/experiments/rtp-hdrext/playout-delay/
How can I use it?
Will it be of any help to me?
Update: there is a new feature that will enable this, called playoutDelayHint.
We want to provide means for JavaScript applications to set their preferences on how fast they want to render audio or video data. As fast as possible might be beneficial for applications which concentrate on real-time experience. For others, additional data buffering may provide a smoother experience in case of network issues.
Refs:
https://discourse.wicg.io/t/hint-attribute-in-webrtc-to-influence-underlying-audio-video-buffering/4038
https://bugs.chromium.org/p/webrtc/issues/detail?id=10287
Demo: https://jsfiddle.net/rvekxns5/
Though I was only able to set a max of 10 s in my browser, it's up to the UA vendor to do the best it can with the resources available.
import('https://jimmy.warting.se/packages/dummycontent/canvas-clock.js')
.then(({AnalogClock}) => {
  const {canvas} = new AnalogClock(100)
  document.querySelector('canvas').replaceWith(canvas)

  const [pc1, pc2] = localPeerConnectionLoop()
  const canvasStream = canvas.captureStream(200)

  videoA.srcObject = canvasStream
  videoA.play()

  pc1.addTransceiver(canvasStream.getTracks()[0], {
    streams: [ canvasStream ]
  })

  pc2.onaddstream = (evt) => {
    videoC.srcObject = evt.stream
    videoC.play()
  }

  $dur.onchange = () => {
    pc2.getReceivers()[0].playoutDelayHint = $dur.valueAsNumber
  }
})
<!-- all the irrelevant parts, that you don't need to know anything about -->
<h3 style="border-bottom: 1px solid">Original canvas</h3>
<canvas id="canvas" width="100" height="100"></canvas>

<script>
  function localPeerConnectionLoop(cfg = {sdpSemantics: 'unified-plan'}) {
    const setD = (d, a, b) => Promise.all([a.setLocalDescription(d), b.setRemoteDescription(d)]);
    return [0, 1].map(() => new RTCPeerConnection(cfg)).map((pc, i, pcs) => Object.assign(pc, {
      onicecandidate: e => e.candidate && pcs[i ^ 1].addIceCandidate(e.candidate),
      onnegotiationneeded: async e => {
        try {
          await setD(await pc.createOffer(), pc, pcs[i ^ 1]);
          await setD(await pcs[i ^ 1].createAnswer(), pcs[i ^ 1], pc);
        } catch (e) {
          console.log(e);
        }
      }
    }));
  }
</script>

<h3 style="border-bottom: 1px solid">Local peer (PC1)</h3>
<video id="videoA" muted width="100" height="100"></video>

<h3 style="border-bottom: 1px solid">Remote peer (PC2)</h3>
<video id="videoC" muted width="100" height="100"></video>

<label> Change playoutDelayHint
  <input type="number" value="1" id="$dur">
</label>

How can I switch Web Audio Source Nodes without clicking?

I'm trying to write a small audio library for a specific web application. To work around Web Audio buffer sources requiring long load times, I'm trying to switch an HTML5 Audio source (via MediaElementSourceNode) to a buffer source once the buffer source is ready to play. With a 20-minute track, it takes Web Audio's buffer source roughly 5 seconds to decode and start playing.
Using MediaElementSourceNode is required for using the PanNode in Web Audio.
First, I thought it was a JS main-thread latency issue that was throwing off the start time. I thought I could solve it by making sure the code that disables the MediaElementSource and enables the BufferSourceNode executes as closely together as possible.
Then I thought the HTML5 audio must have a small delay when it starts, causing the recorded startTime to be off; to get around this, I used an event handler listening for 'play'.
I searched around and discovered that Gapless 5 apparently does this without issue, but looking at its source code I could not discover how it switches sources seamlessly.
play(offset) {
  this.createNodes();
  this.connectNodes();

  // if Web Audio's buffer source is not ready, start playing with HTML5
  if (!this.audioClip.isWebAudioReady() &&
      this.audioClip.playType > 0) {
    this.playHTML5();
  }
  // returns true if buffer != null
  if (!this.audioClip.isWebAudioReady()) {
    this.audioClip.addDecodeListener(this.play.bind(this));
  }
  if (this.audioClip.isWebAudioReady()) {
    this.playBufferSource();
  }
}

playHTML5() {
  var context = AudioManager.context();
  if (this.audioClip.isHTML5Ready()) {
    this.createHTMLSourceNode();
    console.log("playing HTML5");
    this.mediaElementSourceNode.connect(this.gainNode);
    this.mediaElementSourceNode.source.play();
    this.startTime = context.currentTime;
  }
  else {
    console.log('not ready yet');
    this.audioClip.addLoadListener(this.playHTML5.bind(this));
  }
}

playBufferSource() {
  var context = AudioManager.context();
  var offset = context.currentTime - this.startTime;
  if (!this.bufferSourceNode) {
    this.createBufferSourceNode();
  }
  this.bufferSourceNode.connect(this.gainNode);

  // hopelessly attempt to make up for thread latency
  offset = context.currentTime - this.startTime;

  if (this.audioClip.playType > 0) {
    this.mediaElementSourceNode.disconnect();
    this.mediaElementSourceNode = null;
  }

  if (this.audioClip.playType == 0) {
    offset = 0;
    this.bufferSourceNode.start(0, offset);
  }
  else {
    offset = context.currentTime - this.startTime;
    this.bufferSourceNode.start(0, offset);
  }
  // console.log("starting web audio at " + offset);
}

createBufferSourceNode() {
  var context = AudioManager.context();
  if (!this.audioClip.webAudioReady) {
    console.log('Web Audio not ready! Something went wrong!');
    return;
  }
  var buffer = this.audioClip.buffer;
  this.bufferSourceNode = context.createBufferSource();

  // When using anything other than Buffer,
  // we want to disable pitching.
  if (this.audioClip.playType == NS.PlayTypes.Buffer) {
    this.bufferSourceNode.playbackRate.setValueAtTime(this._pitch,
      context.currentTime);
  }
  this.bufferSourceNode.buffer = buffer;
}

createHTMLSourceNode() {
  var context = AudioManager.context();
  var HTMLAudio = this.audioClip.mediaElement.cloneNode(false);
  //HTMLAudio.addEventListener('ended', onHTML5Ended.bind(this), false);
  HTMLAudio.addEventListener('play', this.onHTML5Play.bind(this), false);
  var sourceNode = context.createMediaElementSource(HTMLAudio);
  sourceNode.source = HTMLAudio;
  this.mediaElementSourceNode = sourceNode;
}

/**
 *
 */
onHTML5Play() {
  this.startTime = AudioManager.context().currentTime;
  console.log("HTML5 started playing");
}
Since I'm starting the second source as close as possible in time to the first, I should technically not hear any clicks if the waveforms line up closely enough, but the resulting clicks are very audible; sometimes two clicks are audible.
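One common way to mask the discontinuity at the switch point (not something the code above tries) is to route both sources through GainNodes feeding the same destination and crossfade over a few milliseconds instead of cutting hard. A rough sketch, assuming hypothetical mediaGain and bufferGain GainNodes:

// Fade the media element out and the buffer source in over ~30 ms
function crossfadeToBuffer(context, mediaGain, bufferGain, fadeTime = 0.03) {
  const now = context.currentTime;
  mediaGain.gain.setValueAtTime(1, now);
  mediaGain.gain.linearRampToValueAtTime(0, now + fadeTime);
  bufferGain.gain.setValueAtTime(0, now);
  bufferGain.gain.linearRampToValueAtTime(1, now + fadeTime);
}

Even if the two sources are a few milliseconds out of alignment, the short ramp keeps the transition from producing an audible click.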

WebAudio - seamlessly playing sequence of audio chunks

I have a live, constant source of waveform data that gives me a second of single-channel audio with constant sample rate every second. Currently I play them this way:
// data : Float32Array, context: AudioContext
function audioChunkReceived (context, data, sample_rate) {
    var audioBuffer = context.createBuffer(2, data.length, sample_rate);
    audioBuffer.getChannelData(0).set(data);
    var source = context.createBufferSource(); // creates a sound source
    source.buffer = audioBuffer;
    source.connect(context.destination);
    source.start(0);
}
Audio plays fine but with noticeable pauses between consecutive chunks being played (as expected). I'd like to get rid of them and I understand I'll have to introduce some kind of buffering.
Questions:
Is there a JS library that can do this for me? (I'm in the process of searching through them)
If there is no library that can do this, how should I do it myself?
Detect when playback finishes in one source and have another one ready to play immediately afterwards? (using the AudioBufferSourceNode.onended event handler)
Create one large buffer, copy my audio chunks into it one after another, and control the flow using the AudioBufferSourceNode.start and AudioBufferSourceNode.stop functions?
Something different?
I've written a small class in TypeScript that serves as a buffer for now. It has bufferSize defined for controlling how many chunks it can hold. It's short and self-descriptive, so I'll paste it here. There is much to improve, so any ideas are welcome.
( you can quickly convert it to JS using: https://www.typescriptlang.org/play/ )
class SoundBuffer {
    private chunks : Array<AudioBufferSourceNode> = [];
    private isPlaying: boolean = false;
    private startTime: number = 0;
    private lastChunkOffset: number = 0;

    constructor(public ctx:AudioContext, public sampleRate:number, public bufferSize:number = 6, private debug = true) { }

    private createChunk(chunk:Float32Array) {
        var audioBuffer = this.ctx.createBuffer(2, chunk.length, this.sampleRate);
        audioBuffer.getChannelData(0).set(chunk);
        var source = this.ctx.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(this.ctx.destination);
        source.onended = (e:Event) => {
            this.chunks.splice(this.chunks.indexOf(source), 1);
            if (this.chunks.length == 0) {
                this.isPlaying = false;
                this.startTime = 0;
                this.lastChunkOffset = 0;
            }
        };
        return source;
    }

    private log(data:string) {
        if (this.debug) {
            console.log(new Date().toUTCString() + " : " + data);
        }
    }

    public addChunk(data: Float32Array) {
        if (this.isPlaying && (this.chunks.length > this.bufferSize)) {
            this.log("chunk discarded");
            return; // throw away
        } else if (this.isPlaying && (this.chunks.length <= this.bufferSize)) { // schedule & add right now
            this.log("chunk accepted");
            let chunk = this.createChunk(data);
            chunk.start(this.startTime + this.lastChunkOffset);
            this.lastChunkOffset += chunk.buffer.duration;
            this.chunks.push(chunk);
        } else if ((this.chunks.length < (this.bufferSize / 2)) && !this.isPlaying) { // add & don't schedule
            this.log("chunk queued");
            let chunk = this.createChunk(data);
            this.chunks.push(chunk);
        } else { // add & schedule entire buffer
            this.log("queued chunks scheduled");
            this.isPlaying = true;
            let chunk = this.createChunk(data);
            this.chunks.push(chunk);
            this.startTime = this.ctx.currentTime;
            this.lastChunkOffset = 0;
            for (let i = 0; i < this.chunks.length; i++) {
                let chunk = this.chunks[i];
                chunk.start(this.startTime + this.lastChunkOffset);
                this.lastChunkOffset += chunk.buffer.duration;
            }
        }
    }
}
You don't show how audioChunkReceived is called, but to get seamless playback you have to make sure you have the data before you want to play it and before the previous chunk stops playing.
Once you have this, you can schedule the newest chunk to start playing when the previous one ends by calling start(t), where t is the end time of the previous chunk.
However, if the buffer's sample rate is different from context.sampleRate, it's probably not going to play smoothly because of the resampling needed to convert the buffer to the context rate.
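A minimal sketch of that scheduling idea (not the poster's class), assuming chunks arrive already decoded as AudioBuffers at the context's sample rate:

const ctx = new AudioContext();
let nextStartTime = 0;

function enqueueChunk(audioBuffer) {
  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(ctx.destination);

  // First chunk (or after an underrun): start slightly in the future for headroom
  if (nextStartTime < ctx.currentTime) {
    nextStartTime = ctx.currentTime + 0.05;
  }
  source.start(nextStartTime);           // starts exactly when the previous chunk ends
  nextStartTime += audioBuffer.duration; // this chunk's end time = next chunk's start time
}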
I think it is because you allocate your buffer for 2 channels. Change that to one:
context.createBuffer(2, data.length, sample_rate);
to
context.createBuffer(1, data.length, sample_rate);

How do I call the start() & stop() methods in this to create functional audio controls?

I have a node, audioBufferSourceNode, which holds the audio file that has been loaded.
On line 136 of the full code, which is line 13 below, the start() and stop() methods are being used on the audio node for other things, but I can't get it to ALSO do another thing. How do I call these methods to play and pause the audio? I don't know the correct way to call start() and stop() so that I have buttons or divs that play/pause the audio, and also how do you use those methods to have a volume slider and a mute button? How would I go about doing it?
Side note: I was told declaring the variable audioBufferSourceNode globally would be better practice, but I'm not sure how to do that, what they meant exactly, or if that even has anything to do with my problem.
So, on line 13 below, the start() and stop() methods are being used on the audio node.
_visualize: function(audioContext, buffer) {
    var audioBufferSourceNode = audioContext.createBufferSource(),
        analyser = audioContext.createAnalyser(),
        that = this;
    // connect the source to the analyser
    audioBufferSourceNode.connect(analyser);
    // connect the analyser to the destination (the speakers), or we won't hear the sound
    analyser.connect(audioContext.destination);
    // then assign the buffer to the buffer source node
    audioBufferSourceNode.buffer = buffer;
    // play the source
    if (!audioBufferSourceNode.start) {
        audioBufferSourceNode.start = audioBufferSourceNode.noteOn; // in old browsers use the noteOn method
        audioBufferSourceNode.stop = audioBufferSourceNode.noteOff; // in old browsers use the noteOff method
    };
    // stop the previous sound if any
    if (this.animationId !== null) {
        cancelAnimationFrame(this.animationId);
    }
    if (this.source !== null) {
        this.source.stop(0);
    }
    audioBufferSourceNode.start(0);
    this.status = 1;
    this.source = audioBufferSourceNode;
    audioBufferSourceNode.onended = function() {
        that._audioEnd(that);
    };
    this._updateInfo('Playing ' + this.fileName, false);
    this.info = 'Playing ' + this.fileName;
    document.getElementById('fileWrapper').style.opacity = 0.2;
    this._drawSpectrum(analyser);
},
full code:
https://jsfiddle.net/4hty6kak/1/
Web Audio API doesn't work on jsfiddle, so here's a live working demo:
http://wayou.github.io/HTML5_Audio_Visualizer/
Do you have any simple solid solutions?
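A common pattern for this (a sketch under assumptions, not the code from that project): keep the current AudioBufferSourceNode in a variable, route it through a GainNode, and remember the playback offset, since a buffer source can only be start()ed once.

const ctx = new AudioContext();
const gainNode = ctx.createGain();
gainNode.connect(ctx.destination);

let source = null, startedAt = 0, pausedAt = 0, currentBuffer = null;

function play(buffer) {
  currentBuffer = buffer || currentBuffer;
  source = ctx.createBufferSource();       // a fresh node is needed for every start()
  source.buffer = currentBuffer;
  source.connect(gainNode);
  source.start(0, pausedAt);               // resume from the remembered offset
  startedAt = ctx.currentTime - pausedAt;
}

function pause() {
  if (!source) return;
  pausedAt = ctx.currentTime - startedAt;   // remember where we stopped
  source.stop(0);
  source = null;
}

// The volume slider (0..1) and mute button just set the gain value
function setVolume(v) { gainNode.gain.value = v; }
function mute() { gainNode.gain.value = 0; }

Wire play() and pause() to the buttons' click handlers and setVolume() to the slider's input event.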
