Sound lag issue on mousedown in JavaScript

I am making a website that is a simple piano. I hosted the site, but the audio now lags. When I play it from my local system it works fine, but not from the hosted site.
So I want all the audio resources to be loaded and saved on the user's device (as a cache or something similar) and played from there so the audio won't lag. I can't figure out how to do it; please help me with a solution.
This is what I tried:
const pianoKeys = document.querySelectorAll('.key');

function playSound(soundUrl) {
  const sound = new Audio(soundUrl); // a new request for the file on every keypress
  sound.currentTime = 0;
  sound.play();
}

pianoKeys.forEach((pianoKey, i) => {
  const num = i < 9 ? '0' + (i + 1) : (i + 1);
  const soundUrl = 'sounds/key' + num + '.ogg';
  pianoKey.addEventListener('mousedown', () => playSound(soundUrl));
});
The soundUrl variable builds the path to the audio file. Is there any way to load the file in advance and play from it?
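A minimal sketch of that preloading idea, reusing the same .key elements and sounds/ paths: create one Audio object per key up front so the files are fetched before the first keypress, then reuse those objects in the mousedown handler. (Whether the browser keeps the whole file buffered still depends on its caching heuristics, so treat this as a sketch of the approach rather than a guarantee.)

const pianoKeys = document.querySelectorAll('.key');

pianoKeys.forEach((pianoKey, i) => {
  const num = i < 9 ? '0' + (i + 1) : (i + 1);
  const sound = new Audio('sounds/key' + num + '.ogg'); // created once, not on every keypress
  sound.preload = 'auto'; // hint the browser to fetch the whole file up front

  pianoKey.addEventListener('mousedown', () => {
    sound.currentTime = 0; // rewind so rapid repeats restart the note
    sound.play();
  });
});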

Related

Does R Shiny have the ability to start playing audio from a certain time stamp?

I'm building a Shiny app where users will be able to play snippets of an audio file. The time stamps come from a JSON file that marks each sentence in the audio file. To play the audio I was originally using runjs() like so:
in my input function:
tags$audio(id = "audio", controls = NA, autoplay = NA, src = "")
and in the server function:
observeEvent(input$select.file, {
  runjs(sprintf("document.getElementById('audio').src = '%s';", input$select.file))
})
but I think this will not work for playing a certain segment of the audio. I have been looking at RStudio resources like this one on playing audio, but I haven't found anything showing how to play a section of the audio that doesn't necessarily start at the beginning.
This should work to skip e.g. the first 30 seconds:
observeEvent(input$select.file, {
  runjs(sprintf("var myaudio = document.getElementById('audio');
                 myaudio.src = '%s';
                 myaudio.currentTime = 30;", input$select.file))
})
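If you also need playback to stop at the end of a snippet, a hypothetical extension could watch timeupdate and pause once the end is reached. This is only the JavaScript that would be passed to runjs(), assuming the src has already been set as above; start and end stand in for the time stamps from your JSON:

var myaudio = document.getElementById('audio');
myaudio.currentTime = start; // beginning of the sentence
myaudio.play();
myaudio.ontimeupdate = function () {
  if (myaudio.currentTime >= end) {
    myaudio.pause();
    myaudio.ontimeupdate = null; // stop watching once the snippet has finished
  }
};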

Audio won't play anymore on browser after recording it with MediaRecorder

(See https://github.com/norbjd/wavesurfer-upload-and-record for a minimal reproducible example).
I'm using wavesurfer.js to display audio uploaded by the user as a waveform, and I'm trying to add a feature for recording a part of the audio uploaded.
So I've created a "Record" button (for now it records only 5 seconds of the audio) that runs the following code when clicked. I'm using the MediaRecorder API:
document
  .querySelector('[data-action="record"]')
  .addEventListener('click', () => {
    // re-use audio context from wavesurfer instead of creating a new one
    const audioCtx = wavesurfer.backend.getAudioContext();
    const dest = audioCtx.createMediaStreamDestination();
    const audioStream = dest.stream;

    audioCtx.createMediaElementSource(audio).connect(dest);

    const chunks = [];
    const rec = new MediaRecorder(audioStream);

    rec.ondataavailable = (e) => {
      chunks.push(e.data);
    };

    rec.onstop = () => {
      const blob = new Blob(chunks, { type: "audio/ogg" });
      const a = document.createElement("a");
      a.download = "export.ogg";
      a.href = URL.createObjectURL(blob);
      a.textContent = "export the audio";
      a.click();
      window.URL.revokeObjectURL(a.href);
    };

    wavesurfer.play();
    rec.start();

    setTimeout(() => {
      rec.stop();
      wavesurfer.stop();
    }, 5 * 1000);
  });
When I click the "Record" button, wavesurfer should play (wavesurfer.play()), but I can't hear anything in my browser (though I can see the cursor move). At the end of the recording (5 seconds, set with setTimeout), I can download the recorded audio (the rec.onstop function) and the sound plays correctly in VLC or any other media player.
However, I can't play audio anymore on the webpage via the browser. I can still record audio, and recorded audio can be downloaded and played correctly.
I'm wondering why audio won't play in the browser after clicking the "Record" button for the first time. I think that this line:
audioCtx.createMediaElementSource(audio).connect(dest);
is the issue, but without it, I can't record audio.
I've also tried to create a new AudioContext instead of reusing wavesurfer's:
const audioCtx = new AudioContext();
but that doesn't help (same issue).
I've reproduced the issue in a minimal reproducible example: https://github.com/norbjd/wavesurfer-upload-and-record, so feel free to check it. Any help will be welcome!
You don't need a separate AudioContext, but you do need a MediaStreamDestination created with the same AudioContext (wavesurfer's, in your case) as the audio node you want to record, and you need to connect that audio node to the destination. Keep in mind that once an element is routed through createMediaElementSource, its output only goes where that source node is connected, so connecting it solely to the recording destination means it no longer reaches the speakers.
You can see a complete example of capturing audio and screen video here:
https://github.com/petersalomonsen/javascriptmusic/blob/master/wasmaudioworklet/screenrecorder/screenrecorder.js
(connecting the audio node to record is done after the recording has started, on line 52)
and you can test it live here: https://petersalomonsen.com/webassemblymusic/livecodev2/?gist=c3ad6c376c23677caa41eb79dddb5485
(Toggle the capture checkbox to start recording and press the play button to start the music, toggle the capture checkbox again to stop the recording).
and you can see the actual recording being done on this video: https://youtu.be/FHST7rLxhLM
as you can see in that example, it is still possible to play audio after the recording is finished.
Note that this example has only been tested for Chrome and Firefox.
And specifically for your case with wavesurfer:
Instead of just backend: 'MediaElement', switch to backend: 'MediaElementWebAudio',
and instead of audioCtx.createMediaElementSource(audio).connect(dest);, you can use wavesurfer.backend.sourceMediaElement.connect(dest); to reuse the existing source from wavesurfer (though it also works without this change).
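Put together, a rough sketch of those two changes (assuming a wavesurfer.js version that supports the MediaElementWebAudio backend; the container selector is just a placeholder):

const wavesurfer = WaveSurfer.create({
  container: '#waveform',          // placeholder selector
  backend: 'MediaElementWebAudio', // instead of 'MediaElement'
});

// inside the record handler, reuse wavesurfer's existing source node
// rather than calling createMediaElementSource() a second time
const audioCtx = wavesurfer.backend.getAudioContext();
const dest = audioCtx.createMediaStreamDestination();
wavesurfer.backend.sourceMediaElement.connect(dest);
const rec = new MediaRecorder(dest.stream);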

How to play media files sequentially without visible break?

I've worded my title and tags in a way that should be searchable for both video and audio, as this question isn't specific to one. My specific case only concerns audio, though, so my question body will be written specific to that.
First, the big picture:
I'm sending audio to multiple P2P clients who will connect and disconnect at random intervals. The audio I'm sending is a stream, but each client only needs the part of the stream from the point at which they connected. Here's how I solved that:
Every {timeout} (e.g. 1000ms), create a new audio blob
Blob will be a full audio file, with all metadata it needs to be playable
As soon as a blob is created, convert it to an array buffer (better browser support), and upload it to the client over WebRTC (or WebSockets if they don't support WebRTC)
That works well. There is a delay, but if you keep the timeout low enough, it's fine.
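For reference, a minimal sketch of what that chunking loop might look like (names are illustrative; it assumes the blobs come from a MediaRecorder on the capture stream and that dataChannel is an already-open RTCDataChannel; restarting the recorder each interval is what makes every blob a complete, standalone file):

function startChunkedSend(stream, dataChannel, timeout = 1000) {
  let recorder;

  const startRecorder = () => {
    recorder = new MediaRecorder(stream);
    recorder.ondataavailable = async (e) => {
      if (e.data.size > 0) {
        // convert to an ArrayBuffer for broad browser support, then send
        dataChannel.send(await e.data.arrayBuffer());
      }
    };
    recorder.start(); // no timeslice: stop() yields one complete file
  };

  startRecorder();
  setInterval(() => {
    recorder.stop(); // flushes a complete, playable file via ondataavailable
    startRecorder(); // immediately start recording the next chunk
  }, timeout);
}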
Now, my question:
How can I play my "stream" without any audible gap between chunks?
I say "stream", but I didn't implement it using the Streams API; it is a queue of blobs that gets updated every time the client receives new data.
I've tried a lot of different things like:
Creating a BufferSource, and merging two blobs (converted to audioBuffers) then playing that
Passing an actual stream from Stream API to clients instead of blobs
Playing blobs sequentially, relying on ended event
Loading next blob while current blob is playing
Each has problems, difficulties, or still results in an audible delay.
Here's my most recent attempt at this:
let firstTime = true;
const chunks = [];

Events.on('audio-received', ({ detail: audioChunk }) => {
  chunks.push(audioChunk);

  if (firstTime && chunks.length > 2) {
    // First audio element: starts playing the oldest chunk immediately.
    const currentAudio = document.createElement("audio");
    currentAudio.controls = true;
    currentAudio.preload = 'auto';
    document.body.appendChild(currentAudio);
    currentAudio.src = URL.createObjectURL(chunks.shift());
    currentAudio.play();

    // Second audio element: preloads the next chunk while the first plays.
    const nextAudio = document.createElement("audio");
    nextAudio.controls = true;
    nextAudio.preload = 'auto';
    document.body.appendChild(nextAudio);
    nextAudio.src = URL.createObjectURL(chunks.shift());

    let currentAudioStartTime, nextAudioStartTime;

    // The two elements alternate: when one ends, the other plays and the
    // finished one is reloaded with the next chunk in the queue.
    currentAudio.addEventListener("ended", () => {
      nextAudio.play();
      nextAudioStartTime = new Date();
      if (chunks.length) {
        currentAudio.src = URL.createObjectURL(chunks.shift());
      }
    });

    nextAudio.addEventListener("ended", () => {
      currentAudio.play();
      currentAudioStartTime = new Date();
      console.log(currentAudioStartTime - nextAudioStartTime);
      if (chunks.length) {
        nextAudio.src = URL.createObjectURL(chunks.shift());
      }
    });

    firstTime = false;
  }
});
The audio-received event gets called every ~1000ms. This code works; it plays each "chunk" after the last one finishes, but on Chrome there is a ~300ms gap that's very audible. It plays the first chunk, then goes quiet, then plays the second, and so on. On Firefox the gap is about 50ms.
Can you help me?
I can try to create a reproducible example if that would help.

WebRTC MediaRecorder on remote stream cuts when the stream hangs

The Problem:
During a WebRTC unicast video conference, I can successfully stream video from a mobile device's webcam to a laptop/desktop. I would like to record the remote stream on the laptop/desktop side. (The setup is that a mobile device streams to a laptop/desktop).
However, it is usual for the video stream to hang from time to time. That's not a problem, for the "viewer" side will catch up. However, the recording of the remote stream will stop at the first hang.
Minimal and Reduced Implementation (Local Recording):
I can successfully record the local stream from navigator.mediaDevices.getUserMedia() as follows:
const recordedChunks = [];

navigator.mediaDevices.getUserMedia({
  video: true,
  audio: false
}).then(stream => {
  const localVideoElement = document.getElementById('local-video');
  localVideoElement.srcObject = stream;
  return stream;
}).then(stream => {
  // mimeType belongs in the constructor options; start() takes the timeslice in ms
  const mediaRecorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp9' });
  mediaRecorder.ondataavailable = (event) => {
    if (event.data && event.data.size > 0) {
      recordedChunks.push(event.data);
    }
  };
  mediaRecorder.start(10);
});
I can download this quite easily as follows:
const blob = new Blob(recordedChunks, { type: 'video/webm' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
document.body.appendChild(a);
a.style = 'display: none';
a.href = url;
a.download = 'test.webm';
a.click();
window.URL.revokeObjectURL(url);
Minimal and Reduced Implementation (Remote Recording):
The setup I am using requires recording the remote stream, not the local stream, because iOS Safari does not support the MediaRecorder API. I included the above to show that recording works on the local side. The implementation of the remote stream recording is no different, except that I manually add a 0 Hz audio track to the video, because Chrome appears to have a bug where it won't record without an audio track.
const mediaStream = new MediaStream();
const audioContext = new AudioContext();
const destinationNode = audioContext.createMediaStreamDestination();
const oscillatorNode = audioContext.createOscillator();
oscillatorNode.frequency.setValueAtTime(0, audioContext.currentTime);
oscillatorNode.connect(destinationNode);
const audioTrack = destinationNode.stream.getAudioTracks()[0];
const videoTrack = remoteStream.getVideoTracks()[0]; // Defined somewhere else.
mediaStream.addTrack(videoTrack);
mediaStream.addTrack(audioTrack);
And then I perform the exact same operations that I do on the local stream example above to record the mediaStream variable.
As mentioned, at the first point where the remote stream hangs (due to network latency, perhaps), the remote recording ceases, so that on download, the .webm file (converted to .mp4 via ffmpeg) only lasts up to the point where the first hang occurred.
Attempts to Mitigate:
One attempt to mitigate this issue: rather than recording the remote stream obtained in the callback for WebRTC's ontrack event, I use the video stream from the remote video element instead, via remoteVideoElement.captureStream(). This does not fix the issue.
Any help would be much appreciated. Thank you.
Hopefully, someone is able to post an actual fix for you. In the meantime, here is a nasty, inefficient, totally-not-recommended workaround (sketched in code after these notes):
Route the incoming MediaStream to a video element.
Use requestAnimationFrame() to schedule drawing frames to a canvas. (Note that this removes any sense of genlock from the original video, and is not something you want to do. Unfortunately, we don't have a way of knowing when incoming frames occur, as far as I know.)
Use CanvasCaptureMediaStream as the video source.
Recombine the video track from CanvasCaptureMediaStream along with the audio track from the original MediaStream in a new MediaStream.
Use this new MediaStream for MediaRecorder.
I've done this with past projects where I needed to programmatically manipulate the audio and video. It works!
One big caveat is that there's a bug in Chrome where even though a capture stream is attached to a canvas, the canvas won't be updated if the tab isn't active/visible. And, of course, requestAnimationFrame is severely throttled at best if the tab isn't active, so you need another frame clock source. (I used audio processors, ha!)
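In code, those steps might look roughly like this — a minimal, untested sketch that assumes remoteStream is the MediaStream from the ontrack callback and that a 30 fps canvas capture rate is acceptable:

// 1-2. route the incoming stream into a video element and draw it to a canvas
const video = document.createElement('video');
video.srcObject = remoteStream;
video.muted = true;
video.play();

const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
video.addEventListener('loadedmetadata', () => {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
});

(function draw() {
  // no genlock: we just repaint whatever frame the element currently shows
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(draw);
})();

// 3-5. capture the canvas, recombine with the audio track, and record that
const canvasStream = canvas.captureStream(30);
const mixed = new MediaStream([
  ...canvasStream.getVideoTracks(),
  ...remoteStream.getAudioTracks(), // or the generated silent track from the question
]);
const recorder = new MediaRecorder(mixed);
recorder.start(10);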

Is it possible to merge multiple webm blobs/clips into one sequential video clientside?

I already looked at this question -
Concatenate parts of two or more webm video blobs
And I tried the sample code here - https://developer.mozilla.org/en-US/docs/Web/API/MediaSource - (without modifications) in hopes of transforming the blobs into array buffers and appending those to a SourceBuffer for the MediaSource Web API, but even the sample code wasn't working in my Chrome browser, with which it is said to be compatible.
The crux of my problem is that I can't combine multiple blob webm clips into one without incorrect playback after the first time it plays. To go straight to the problem please scroll to the line after the first two chunks of code, for background continue reading.
I am designing a web application that allows a presenter to record scenes of him/herself explaining charts and videos.
I am using the MediaRecorder WebAPI to record video on Chrome/Firefox. (Side question - is there any other way (besides Flash) that I can record video/audio via webcam and mic? Because MediaRecorder is not supported on non-Chrome/Firefox user agents.)
navigator.mediaDevices.getUserMedia(constraints)
  .then(gotMedia)
  .catch(e => { console.error('getUserMedia() failed: ' + e); });

function gotMedia(stream) {
  recording = true;
  theStream = stream;
  vid.srcObject = theStream; // srcObject replaces the deprecated URL.createObjectURL(stream)

  try {
    recorder = new MediaRecorder(stream);
  } catch (e) {
    console.error('Exception while creating MediaRecorder: ' + e);
    return;
  }
  theRecorder = recorder;

  recorder.ondataavailable = (event) => {
    tempScene.push(event.data);
  };

  theRecorder.start(100);
}
function finishRecording() {
  recording = false;
  theRecorder.stop();
  theStream.getTracks().forEach(track => { track.stop(); });

  while (tempScene[0].size != 1) {
    tempScene.splice(0, 1);
  }

  console.log(tempScene);
  scenes.push(tempScene);
  tempScene = [];
}
The function finishRecording gets called and a scene (an array of blobs of MIME type 'video/webm') gets saved to the scenes array. After it gets saved, the user can record and save more scenes via this process. They can then view a certain scene using the following chunk of code.
function showScene(sceneNum) {
  var sceneBlob = new Blob(scenes[sceneNum], { type: 'video/webm; codecs=vorbis,vp8' });
  vid.src = URL.createObjectURL(sceneBlob);
  vid.play();
}
In the above code, the blob array for the scene gets turned into one big blob, for which a URL is created and pointed to by the video's src attribute, so -
[blob, blob, blob] => sceneBlob (an object, not an array)
Up until this point everything works fine and dandy. Here is where the issue starts.
I try to merge all the scenes into one by combining the blob arrays for each scene into one long blob array. The point of this functionality is so that the user can order the scenes however they see fit and can choose not to include a scene. So the scenes aren't necessarily in the same order as they were recorded, so -
scene 1: [blob-1, blob-1] scene 2: [blob-2, blob-2]
final: [blob-2, blob-2, blob-1, blob-1]
and then I make a blob of the final blob array, so -
final: [blob, blob, blob, blob] => finalBlob
The code is below for merging the scene blob arrays
function mergeScenes() {
  scenes[scenes.length] = [];
  for (var i = 0; i < scenes.length - 1; i++) {
    scenes[scenes.length - 1] = scenes[scenes.length - 1].concat(scenes[i]);
  }
  mergedScenes = scenes[scenes.length - 1];
  console.log(scenes[scenes.length - 1]);
}
This final scene can be viewed by using the showScene function in the second small chunk of code because it is appended as the last scene in the scenes array. When the video is played with the showScene function it plays all the scenes all the way through. However, if I press play on the video after it plays through the first time, it only plays the last scene.
Also, if I download and play the video through my browser, the first time around it plays correctly - the subsequent times, I see the same error.
What am I doing wrong? How can I merge the files into one video containing all the scenes? Thank you very much for your time in reading this and helping me, and please let me know if I need to clarify anything.
I am using a <video> element to display the scenes.
The file's headers (metadata) should only be appended to the first chunk of data you've got.
You can't make a new video file by just pasting one blob after the other; they've got a structure.
So how do you work around this?
If I understood your problem correctly, what you need is to be able to merge all the recorded videos as if the recording had only been paused.
Well, this can be achieved thanks to the MediaRecorder.pause() method.
You can keep the stream open and simply pause the MediaRecorder. At each pause event, you'll be able to generate a new video containing all the frames from the beginning of the recording up to that event.
Here is an external demo, because Stack Snippets don't work well with gUM...
And if you ever need shorter videos covering the span between each resume and pause event, you could simply create new MediaRecorders for these smaller parts while keeping the big one running.
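A minimal sketch of that pause/resume idea (assuming stream comes from getUserMedia, as in the question; endScene/startScene/finish are illustrative names):

const chunks = [];
const recorder = new MediaRecorder(stream);
recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
recorder.start(100); // gather data in small chunks, as in the question

function endScene()   { recorder.pause();  } // between scenes: pause, don't stop
function startScene() { recorder.resume(); } // the next scene continues the same file

// at any pause, new Blob(chunks, { type: 'video/webm' }) is already a playable
// file covering everything from the very first scene up to (roughly) that point
function finish() {
  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: 'video/webm' }));
    recorder.stop();
  });
}

Because the recorder is never stopped between scenes, the concatenation of the chunks stays one structurally valid file, which is what avoids the replay problem described above.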
