Web audio API, problem using the panNode, the sounds only play once - javascript

I am trying to integrate some panning effects to some sounds in a small testing app. It works fine except for one important issue: each sound only plays once!
I have tried several ways to work around the issue without any success, and I can't pinpoint where the problem comes from. Here is my code, with a little explanation below.
const audio = new Audio('audio/background.mp3');
const footstep = new Audio('audio/footstep1.mp3');
const bumpWall1 = new Audio(`audio/bump-wall1.mp3`);
const bumpWall2 = new Audio(`audio/bump-wall2.mp3`);
const bumpWall3 = new Audio(`audio/bump-wall3.mp3`);
const bumpWall4 = new Audio(`audio/bump-wall4.mp3`);
const bumpWallArray = [bumpWall1, bumpWall2, bumpWall3, bumpWall4];
audio.volume = 0.5;
function play(sound, dir) {
  let audioContext = new AudioContext();
  let pre = document.querySelector('pre');
  let myScript = document.querySelector('script');
  let source = audioContext.createMediaElementSource(sound);
  let panNode = audioContext.createStereoPanner();
  source.connect(panNode);
  panNode.connect(audioContext.destination);
  if (dir === "left") {
    panNode.pan.value = -0.8;
  } else if (dir === "right") {
    panNode.pan.value = 0.8;
  } else {
    panNode.pan.value = 0;
  }
  sound.play();
}
So basically, when you call the play() function it plays the sound on the left, on the right, or in the center. But each sound plays only once: for example, once the footstep sound has been played, it never plays again even if I call the play() function on it.
Can anyone help me with that?

In your developer console, you should have a message stating something along the lines of
Uncaught InvalidStateError: Failed to execute 'createMediaElementSource' on 'AudioContext': HTMLMediaElement already connected previously to a different MediaElementSourceNode.
(At least in Chrome) you can't connect an HTMLMediaElement to more than one MediaElementSourceNode.
To avoid this error you would have to disconnect the MediaElement from its previous MediaElementSourceNode, but that isn't possible...
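One rough workaround (not part of the original answer, just a sketch with illustrative names) is to create the AudioContext once and build the MediaElementSourceNode for each Audio element only a single time, caching and reusing it on later calls:
// Rough sketch (not from the original answer): create the AudioContext and the
// MediaElementSourceNode only once per element, then reuse them on later calls.
const sharedContext = new AudioContext();
const panNodes = new Map(); // HTMLMediaElement -> StereoPannerNode

function playCached(sound, dir) {
  let panNode = panNodes.get(sound);
  if (!panNode) {
    // createMediaElementSource() is now called only once per element
    const source = sharedContext.createMediaElementSource(sound);
    panNode = sharedContext.createStereoPanner();
    source.connect(panNode);
    panNode.connect(sharedContext.destination);
    panNodes.set(sound, panNode);
  }
  panNode.pan.value = dir === "left" ? -0.8 : dir === "right" ? 0.8 : 0;
  sound.currentTime = 0; // rewind so the same element can play again
  sound.play();
}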
Still, the best option in your situation is probably to use AudioBuffers directly rather than HTMLAudioElements, especially since you don't append them to the document.
let audio;
const sel = document.getElementById( 'sel' );
// create a single AudioContext, these are not small objects
const audioContext = new AudioContext();
fetch( 'https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3' ).then( resp => resp.arrayBuffer() )
.then( buf => audioContext.decodeAudioData( buf ) )
.then( audioBuffer => audio = audioBuffer )
.then( () => sel.disabled = false )
.catch( console.error );
function play(sound, dir) {
  let source = audioContext.createBufferSource();
  source.buffer = sound;
  let panNode = audioContext.createStereoPanner();
  source.connect( panNode );
  panNode.connect( audioContext.destination );
  if (dir === "left") {
    panNode.pan.value = -0.8;
  } else if (dir === "right") {
    panNode.pan.value = 0.8;
  } else {
    panNode.pan.value = 0;
  }
  source.start( 0 );
}
sel.onchange = evt => play( audio, sel.value );
<select id="sel" disabled>
<option>left</option>
<option>center</option>
<option>right</option>
</select>
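If you want to extend that example to the several files from the question, a possible loading helper (a sketch, assuming the same audio/*.mp3 paths as in the question and the single audioContext created above) could decode everything up front:
// Sketch: decode the question's files into AudioBuffers up front
// (paths taken from the question; reuses the single audioContext from above).
function loadBuffer(url) {
  return fetch(url)
    .then(resp => resp.arrayBuffer())
    .then(buf => audioContext.decodeAudioData(buf));
}

let footstepBuffer;
let bumpWallBuffers = [];

Promise.all([
  loadBuffer('audio/footstep1.mp3'),
  loadBuffer('audio/bump-wall1.mp3'),
  loadBuffer('audio/bump-wall2.mp3'),
  loadBuffer('audio/bump-wall3.mp3'),
  loadBuffer('audio/bump-wall4.mp3'),
]).then(([footstep, ...bumps]) => {
  footstepBuffer = footstep;
  bumpWallBuffers = bumps;
  // each buffer can now be passed to play() as many times as needed,
  // e.g. play(footstepBuffer, 'left') or play(bumpWallBuffers[0], 'right')
}).catch(console.error);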

Related

Slow Render/Delayed Render in Vanilla Javascript App

I have a couple of issues - the app is a vanilla JavaScript app for school that maps audio to each alphabetical key. The issues are as follows:
It takes quite a while to load the audio - the app proceeds as normal but with no audio - and it usually runs as intended after a few minutes or a few refreshes, but I want to get rid of that period. I tried to fix it using (document.readyState === "interactive") and if (allAudio.entries(audio => (audio.readyState === 4))).
It doesn't take the first key down event to change the intro slides, it takes the second - and I'm also not sure how to fix this. The slides are in an array that goes through each item and then sets its display to none. I also tried to add the keyboard event listener earlier, but to no avail.
To look at the bugs yourself, the live link is here: https://haeuncreative.github.io/mosatic/
Relevant code:
if (document.readyState === "interactive") {
  const allAudio = document.querySelectorAll("audio")
  console.log(allAudio)
  if (allAudio.entries(audio => (audio.readyState === 4)))
    window.addEventListener('load', function() {
      const keysDown = new KeyDownHandler()
      const canvas = document.querySelector('canvas');
      const context = canvas.getContext('2d');
      context.fillStyle = '#967bb6'
      context.fillRect(0, 0, 1000, 562.5)
      const clickDown = new ClickHandler(keysDown)
    });
}
export default class KeyDownHandler {
constructor() {
this.addPressListener()
// audio
this.soundBank = new AudioBank
this.soundBank.createBank(CONSTANTS.KEY_ALPHABET)
// visual // intro
this.intro1 = document.querySelector('#intro1')
this.intro2a = document.querySelector('#intro2a')
this.intro2b = document.querySelector('#intro2b')
this.intro2c = document.querySelector('#intro2c')
this.intro3 = document.querySelector('#intro3')
this.intro4 = document.querySelector('#intro4')
this.introBank = [
this.intro1,
this.intro2a,
this.intro2b,
this.intro2c,
this.intro3,
this.intro4
]
// visual // main
this.body = document.querySelector('body')
this.canvas = document.querySelector('canvas');
this.context = this.canvas.getContext('2d');
this.background_colors = ["#95c88c", "#967bb6", "#A7C7E7", "#FF6961"]
this.aniBank = new AniBank;
// recording/user interaction
this.keys = [];
this.durations = [];
this.recording = false;
}
introSwitch() {
this.soundBank.playSpace()
if (this.currentSlide) {
this.currentSlide.style.animation = "fadeOut 1s"
this.currentSlide.style.display = "none"
this.currentSlide = ""
}
if (!this.introBank.length) {
slide.style.display = "none"
this.introFinish = true
}
if (this.introBank.length) {
let slide = (this.introBank.shift())
slide.style.filter = "brightness(60%)"
console.log(slide)
if (slide === this.intro4 || !slide) {
this.canvas.style.display = "flex";
this.addKeyListeners()
}
if (slide.style.display = "none") {
slide.style.display = "flex"
slide.style.filter = "brightness(60%)"
}
this.currentSlide = slide;
slide.style.filter = "brightness(100%)"
}
}
addPressListener() {
window.addEventListener('keypress', e => {
e.preventDefault()
e.stopImmediatePropagation()
this.introSwitch()
if (this.currentSlide === this.intro4) {
this.currentSlide.style.display = "none"
window.removeEventListener("keypress", this.introSwitch)
this.addKeyListeners()
}
})
}
}
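For the first issue, one common pattern (not from the original post, just a sketch) is to wait until every <audio> element reports it can play through before starting the app:
// Sketch (not from the original post): resolve once every <audio> element can play through.
function waitForAudio() {
  const allAudio = Array.from(document.querySelectorAll('audio'));
  return Promise.all(allAudio.map(audio => new Promise(resolve => {
    if (audio.readyState >= 4) {            // HAVE_ENOUGH_DATA
      resolve();
    } else {
      audio.addEventListener('canplaythrough', resolve, { once: true });
    }
  })));
}

window.addEventListener('load', () => {
  waitForAudio().then(() => {
    // safe to start the app here: all sounds are ready
  });
});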

Weird (colorless) texture rendering in THREE.js under Node.js

Trying to render a GLB file with THREE.js on Node.js, but I get the wrong result.
Everything is running under Debian on WSL2 with the Mesa drivers installed (as instructed by the headless-gl package description).
The app runs with the xvfb-run -s "-ac -screen 0 1280x1024x24" node --trace-warnings -r esm app.js command.
I am using the JSDOM, canvas, node-canvas-webgl, and xhr2 packages to emulate browser behavior so that the THREE and GLTFLoader code can run.
expected result - what I get
I am also creating a panorama with CubemapToEquirectangular, but even without this package the rendering is incorrect.
result in browser - result in node
This initial code helps me run everything I need.
const { JSDOM } = require('jsdom');
const { window } = (new JSDOM('<!doctype html><html><head><title></title></head><body></body></html>'));
global.window = window;
global.self = window;
global.document = window.document;
global.navigator = window.navigator;
global.File = window.File;
global.FileReader = window.FileReader;
global.Blob = window.Blob;
const createObjectURL = async object => {
if (object instanceof Blob) {
const convert = new Promise(resolve => {
const a = new FileReader();
a.onload = function(e) { resolve(e.target.result); };
a.readAsDataURL(object);
});
return await convert;
} else return typeof object === typeof '' ? object : '';
};
global.URL = window.URL;
global.URL.createObjectURL = createObjectURL;
global.URL.revokeObjectURL = url => null;
global.XMLHttpRequest = require('xhr2');
global.atob = require('atob');
const nodeCanvas = require('canvas');
global.ImageData = nodeCanvas.ImageData;
global.Image = nodeCanvas.Image;
global.Image.prototype.addEventListener = function(type, listener, useCapture) {
this['on' + type] = listener.bind(this);
};
global.Image.prototype.removeEventListener = function(type, listener, useCapture) {
this['on' + type] = null;
};
global.document.createElementNS = (uri, type) => {
if (type === 'img') {
const image = new global.Image();
return image;
}
return window.document.createElementNS(uri, type);
};
global.TextDecoder = require('util').TextDecoder;
I also use createCanvas from node-canvas-webgl to create canvas for renderer
const { createCanvas } = require('node-canvas-webgl/lib');
const canvas = createCanvas(512, 512);
const renderer = new THREE.WebGLRenderer({ canvas });
By the way, node-canvas-webgl/lib/canvas.js is modified on line 108 to make it work correctly:
- if(pixels._image) pixels = pixels._image;
+ if(pixels && pixels._image) pixels = pixels._image;
The rest of the code is pretty much the same as in every THREE examples.
I also have to mention that Duck.glb renders correctly, but no other big model I have tried does.
duck is okay
Has anyone faced this problem, and is there any solution?

Get ReadableStream from Webcam in Browser

I would like to get webcam input as a ReadableStream in the browser to pipe to a WritableStream. I have tried using the MediaRecorder API, but that stream is chunked into separate blobs while I would like one continuous stream. I'm thinking the solution might be to pipe the MediaRecorder chunks to a unified buffer and read from that as a continuous stream, but I'm not sure how to get that intermediate buffer working.
mediaRecorder = new MediaRecorder(stream, recorderOptions);
mediaRecorder.ondataavailable = handleDataAvailable;
mediaRecorder.start(1000);
async function handleDataAvailable(event) {
if (event.data.size > 0) {
const data: Blob = event.data;
// I think I need to pipe to an intermediate stream? Not sure how tho
data.stream().pipeTo(writable);
}
}
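One way to realize the intermediate stream idea from the question (a sketch, not part of the answer below; note the data is still the encoded recording, not raw frames) is to enqueue each chunk into a single ReadableStream:
// Sketch: wrap the MediaRecorder chunks in one continuous ReadableStream
// by enqueuing each Blob's bytes as they arrive.
let chunkController;
const recorderStream = new ReadableStream({
  start(controller) { chunkController = controller; }
});

mediaRecorder.ondataavailable = async (event) => {
  if (event.data.size > 0) {
    chunkController.enqueue(new Uint8Array(await event.data.arrayBuffer()));
  }
};
mediaRecorder.onstop = () => chunkController.close();

// now there is a single stream to pipe wherever it is needed
recorderStream.pipeTo(writable);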
Currently we can't really access the raw data of a MediaStream; the closest we have for video is the MediaRecorder API, but it encodes the data and works in chunks, not as a stream.
However, there is a new MediaCapture Transform W3C group working on a MediaStreamTrackProcessor interface that does exactly what you want, and it is already available in Chrome behind the chrome://flags/#enable-experimental-web-platform-features flag.
When reading the resulting stream, and depending on which kind of track you passed, you'll gain access to VideoFrames or AudioFrames, which are being added by the new WebCodecs API.
if( window.MediaStreamTrackProcessor ) {
  const track = getCanvasTrack();
  const processor = new MediaStreamTrackProcessor( track );
  const reader = processor.readable.getReader();
  readChunk();
  function readChunk() {
    reader.read().then( ({ done, value }) => {
      // value is a VideoFrame
      // we can read the data in each of its planes into an ArrayBufferView
      const channels = value.planes.map( (plane) => {
        const arr = new Uint8Array(plane.length);
        plane.readInto(arr);
        return arr;
      });
      value.close(); // close the VideoFrame when we're done with it
      log.textContent = "planes data (15 first values):\n" +
        channels.map( (arr) => JSON.stringify( [...arr.subarray(0,15)] ) ).join("\n");
      if( !done ) {
        readChunk();
      }
    });
  }
}
else {
  console.error("your browser doesn't support this API yet");
}
function getCanvasTrack() {
  // just some noise...
  const canvas = document.getElementById("canvas");
  const ctx = canvas.getContext("2d");
  const img = new ImageData(300, 150);
  const data = new Uint32Array(img.data.buffer);
  const track = canvas.captureStream().getVideoTracks()[0];
  anim();
  return track;
  function anim() {
    for( let i=0; i<data.length; i++ ) {
      data[i] = Math.random() * 0xFFFFFF + 0xFF000000;
    }
    ctx.putImageData(img, 0, 0);
    if( track.readyState === "live" ) {
      requestAnimationFrame(anim);
    }
  }
}
<pre id="log"></pre>
<p>
Source<br>
<canvas id="canvas"></canvas>
</p>

How to Switch Video Cameras Using WebRTC

I am currently working on a WebRTC multipeer connection. I want to be able to switch the camera that is being used in the middle of a call, without having to change the selected camera in Settings.
I followed along with the code from this RTC example, and it works, but only client-side.
devices.js
'use strict';
const videoElement = document.querySelector('#local');
const audioInputSelect = document.querySelector('select#audioSource');
const audioOutputSelect = document.querySelector('select#audioOutput');
const videoSelect = document.querySelector('select#videoSource');
const selectors = [audioInputSelect, audioOutputSelect, videoSelect];
audioOutputSelect.disabled = !('sinkId' in HTMLMediaElement.prototype);
function gotDevices(deviceInfos) {
// Handles being called several times to update labels. Preserve values.
const values = selectors.map(select => select.value);
selectors.forEach(select => {
while (select.firstChild) {
select.removeChild(select.firstChild);
}
});
for (let i = 0; i !== deviceInfos.length; ++i) {
const deviceInfo = deviceInfos[i];
const option = document.createElement('option');
option.value = deviceInfo.deviceId;
if (deviceInfo.kind === 'audioinput') {
option.text = deviceInfo.label || `microphone ${audioInputSelect.length + 1}`;
audioInputSelect.appendChild(option);
} else if (deviceInfo.kind === 'audiooutput') {
option.text = deviceInfo.label || `speaker ${audioOutputSelect.length + 1}`;
audioOutputSelect.appendChild(option);
} else if (deviceInfo.kind === 'videoinput') {
option.text = deviceInfo.label || `camera ${videoSelect.length + 1}`;
videoSelect.appendChild(option);
} else {
console.log('Some other kind of source/device: ', deviceInfo);
}
}
selectors.forEach((select, selectorIndex) => {
if (Array.prototype.slice.call(select.childNodes).some(n => n.value === values[selectorIndex])) {
select.value = values[selectorIndex];
}
});
}
navigator.mediaDevices.enumerateDevices().then(gotDevices).catch(handleError);
// Attach audio output device to video element using device/sink ID.
function attachSinkId(element, sinkId) {
if (typeof element.sinkId !== 'undefined') {
element.setSinkId(sinkId)
.then(() => {
console.log(`Success, audio output device attached: ${sinkId}`);
})
.catch(error => {
let errorMessage = error;
if (error.name === 'SecurityError') {
errorMessage = `You need to use HTTPS for selecting audio output device: ${error}`;
}
console.error(errorMessage);
// Jump back to first output device in the list as it's the default.
audioOutputSelect.selectedIndex = 0;
});
} else {
console.warn('Browser does not support output device selection.');
}
}
function changeAudioDestination() {
const audioDestination = audioOutputSelect.value;
attachSinkId(videoElement, audioDestination);
}
function gotStream(stream) {
window.stream = stream; // make stream available to console
videoElement.srcObject = stream;
// Refresh button list in case labels have become available
return navigator.mediaDevices.enumerateDevices();
}
function handleError(error) {
console.log('navigator.MediaDevices.getUserMedia error: ', error.message, error.name);
}
function start() {
if (window.stream) {
window.stream.getTracks().forEach(track => {
track.stop();
});
}
const audioSource = audioInputSelect.value;
const videoSource = videoSelect.value;
const constraints = {
audio: {deviceId: audioSource ? {exact: audioSource} : undefined},
video: {deviceId: videoSource ? {exact: videoSource} : undefined}
};
navigator.mediaDevices.getUserMedia(constraints).then(gotStream).then(gotDevices).catch(handleError);
}
audioInputSelect.onchange = start;
audioOutputSelect.onchange = changeAudioDestination;
videoSelect.onchange = start;
start();
Is there an easy way to do this? I think it would have something to do with tracks, but I'm not really sure, as I just started working with WebRTC.
If you want to view the full code for the repository, click here
Thanks!
To switch cameras, you must release the first camera's MediaStream by stopping all its tracks, then you must use getUserMedia() to get another MediaStream for the other camera. The browser won't prompt your user for permission again in this case; the camera will just switch. As you stop the tracks, call .removeTrack() on your rtcPeerConnection. Then, with the new stream's tracks, call .addTrack().
You may already know this, but enumerateDevices() returns much more useful information if you have an open MediaStream. That's because the user has granted permission.
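A rough sketch of that flow (assuming names like pc for your RTCPeerConnection and the videoSelect/videoElement from the code above; not the exact code from the linked repository):
// Sketch (assumed names: pc is your RTCPeerConnection, videoSelect/videoElement as above).
async function switchCamera() {
  // 1. stop the current tracks and remove them from the connection
  if (window.stream) {
    window.stream.getTracks().forEach(track => {
      const sender = pc.getSenders().find(s => s.track === track);
      if (sender) pc.removeTrack(sender);
      track.stop();
    });
  }
  // 2. grab the other camera
  const newStream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: videoSelect.value } },
    audio: true
  });
  // 3. add the new tracks to the connection (this triggers renegotiation)
  newStream.getTracks().forEach(track => pc.addTrack(track, newStream));
  window.stream = newStream;
  videoElement.srcObject = newStream;
}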
If you want to replace the video sent to the remote end, you need to call RTCPeerConnection.replaceTrack(). As usual, MDN has a good example.
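A minimal sketch of the replaceTrack() approach (again assuming pc is your RTCPeerConnection; replaceTrack avoids renegotiation):
// Sketch: swap the outgoing video track without renegotiating (pc is assumed).
async function replaceCamera(deviceId) {
  const newStream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId } }
  });
  const newTrack = newStream.getVideoTracks()[0];
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  await sender.replaceTrack(newTrack);   // remote side keeps the same transceiver
  videoElement.srcObject = newStream;    // update the local preview too
}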

I'm capturing screen by using media recorder and making video from blob but that video is not showing its duration [duplicate]

I am in the process of replacing RecordRTC with the built-in MediaRecorder for recording audio in Chrome. The recorded audio is then played in the program with the audio API. I am having trouble getting the audio.duration property to work. It says
If the video (audio) is streamed and has no predefined length, "Inf" (Infinity) is returned.
With RecordRTC, I had to use ffmpeg_asm.js to convert the audio from wav to ogg. My guess is somewhere in the process RecordRTC sets the predefined audio length. Is there any way to set the predefined length using MediaRecorder?
This is a Chrome bug.
Firefox does expose the duration of the recorded media, and if you set the currentTime of the recorded media to more than its actual duration, then the property becomes available in Chrome...
var recorder,
chunks = [],
ctx = new AudioContext(),
aud = document.getElementById('aud');
function exportAudio() {
var blob = new Blob(chunks);
aud.src = URL.createObjectURL(new Blob(chunks));
aud.onloadedmetadata = function() {
// it should already be available here
log.textContent = ' duration: ' + aud.duration;
// handle chrome's bug
if (aud.duration === Infinity) {
// set it to bigger than the actual duration
aud.currentTime = 1e101;
aud.ontimeupdate = function() {
this.ontimeupdate = () => {
return;
}
log.textContent += ' after workaround: ' + aud.duration;
aud.currentTime = 0;
}
}
}
}
function getData() {
var request = new XMLHttpRequest();
request.open('GET', 'https://upload.wikimedia.org/wikipedia/commons/4/4b/011229beowulf_grendel.ogg', true);
request.responseType = 'arraybuffer';
request.onload = decodeAudio;
request.send();
}
function decodeAudio(evt) {
var audioData = this.response;
ctx.decodeAudioData(audioData, startRecording);
}
function startRecording(buffer) {
var source = ctx.createBufferSource();
source.buffer = buffer;
var dest = ctx.createMediaStreamDestination();
source.connect(dest);
recorder = new MediaRecorder(dest.stream);
recorder.ondataavailable = saveChunks;
recorder.onstop = exportAudio;
source.start(0);
recorder.start();
log.innerHTML = 'recording...'
// record only 5 seconds
setTimeout(function() {
recorder.stop();
}, 5000);
}
function saveChunks(evt) {
if (evt.data.size > 0) {
chunks.push(evt.data);
}
}
// we need user-activation
document.getElementById('button').onclick = function(evt){
getData();
this.remove();
}
<button id="button">start</button>
<audio id="aud" controls></audio><span id="log"></span>
So the advice here would be to star the bug report so that the Chromium team takes some time to fix it, even if this workaround can do the trick...
Thanks to #Kaiido for identifying the bug and offering the working fix.
I prepared an npm package called get-blob-duration that you can install to get a nice Promise-wrapped function to do the dirty work.
Usage is as follows:
// Returns Promise<Number>
getBlobDuration(blob).then(function(duration) {
console.log(duration + ' seconds');
});
Or ECMAScript 6:
// yada yada async
const duration = await getBlobDuration(blob)
console.log(duration + ' seconds')
A bug in Chrome, detected in 2016, but still open today (March 2019), is the root cause behind this behavior. Under certain scenarios audioElement.duration will return Infinity.
Chrome Bug information here and here
The following code provides a workaround to avoid the bug.
Usage: Create your audioElement, and call this function a single time, providing a reference to your audioElement. When the returned promise resolves, the audioElement.duration property should contain the right value. (It also fixes the same problem with videoElements.)
/**
* calculateMediaDuration()
* Force media element duration calculation.
* Returns a promise, that resolves when duration is calculated
**/
function calculateMediaDuration(media){
  return new Promise( (resolve, reject) => {
    media.onloadedmetadata = function(){
      // set the mediaElement.currentTime to a high value beyond its real duration
      media.currentTime = Number.MAX_SAFE_INTEGER;
      // listen to time position change
      media.ontimeupdate = function(){
        media.ontimeupdate = function(){};
        // setting player currentTime back to 0 can be buggy too, set it first to .1 sec
        media.currentTime = 0.1;
        media.currentTime = 0;
        // media.duration should now have its correct value, return it...
        resolve(media.duration);
      }
    }
  });
}
// USAGE EXAMPLE :
calculateMediaDuration( yourAudioElement ).then( ()=>{
console.log( yourAudioElement.duration )
});
Thanks to #colxi for the actual solution; I've added some validation steps, as the solution was working fine but had problems with long audio files.
It took me about 4 hours to get it to work with long audio files; it turns out validation was the fix.
function fixInfinity(media) {
return new Promise((resolve, reject) => {
//Wait for media to load metadata
media.onloadedmetadata = () => {
//Changes the current time to update ontimeupdate
media.currentTime = Number.MAX_SAFE_INTEGER;
//Check if its infinite NaN or undefined
if (ifNull(media)) {
media.ontimeupdate = () => {
//If it is not null resolve the promise and send the duration
if (!ifNull(media)) {
//If it is not null resolve the promise and send the duration
resolve(media.duration);
}
//Check if its infinite NaN or undefined //The second ontime update is a fallback if the first one fails
media.ontimeupdate = () => {
if (!ifNull(media)) {
resolve(media.duration);
}
};
};
} else {
//If media duration was never infinity return it
resolve(media.duration);
}
};
});
}
//Check if null
function ifNull(media) {
if (media.duration === Infinity || Number.isNaN(media.duration) || media.duration === undefined) {
return true;
} else {
return false;
}
}
//USAGE EXAMPLE
//Get audio player on html
const AudioPlayer = document.getElementById('audio');
const getInfinity = async () => {
//Await for promise
await fixInfinity(AudioPlayer).then(val => {
//Reset audio current time
AudioPlayer.currentTime = 0;
//Log duration
console.log(val)
})
}
I wrapped the webm-duration-fix package to solve the WebM duration problem. It can be used in Node.js and in web browsers, and it supports video files over 2GB without using too much memory.
Usage is as follows:
import fixWebmDuration from 'webm-duration-fix';
const mimeType = 'video/webm;codecs=vp9';
let blobSlice: BlobPart[] = [];
mediaRecorder = new MediaRecorder(stream, {
mimeType
});
mediaRecorder.ondataavailable = (event: BlobEvent) => {
blobSlice.push(event.data);
}
mediaRecorder.onstop = async () => {
// fix blob, support fix webm file larger than 2GB
const fixBlob = await fixWebmDuration(new Blob([...blobSlice], { type: mimeType }));
// to write locally, it is recommended to use fs.createWriteStream to reduce memory usage
const fileWriteStream = fs.createWriteStream(inputPath);
const blobReadstream = fixBlob.stream();
const blobReader = blobReadstream.getReader();
while (true) {
let { done, value } = await blobReader.read();
if (done) {
console.log('write done.');
fileWriteStream.close();
break;
}
fileWriteStream.write(value);
value = null;
}
blobSlice = [];
};
If you want to modify the video file itself, you can use the webmFixDuration package; other methods are applied only at the display level, on the video tag, whereas with this method the complete video file is modified.
webmFixDuration GitHub example:
mediaRecorder.onstop = async () => {
const duration = Date.now() - startTime;
const buggyBlob = new Blob(mediaParts, { type: 'video/webm' });
const fixedBlob = await webmFixDuration(buggyBlob, duration);
displayResult(fixedBlob);
};
