I've been encountering performance issues, maybe even a bug, in Chrome (latest stable). I have an audiovisual app that uses requestAnimationFrame() and the Web Audio API. Each frame it pushes a new vector path into an SVG element to display the visuals.
When I play music and run the analyser, the visualization starts to lag terribly.
Chrome's performance profiler doesn't report anything unusual: it claims ~13 ms per frame, which is 75 fps (my monitor's refresh rate), and tracking FPS inside the requestAnimationFrame() callback returns 75 fps too.
I noticed that when I muted the tab, the FPS suddenly returned to expected behavior.
I tried multiple Chromium-based browsers (Brave, Opera GX, Edge) as well as Firefox; they all performed as expected. Only Chrome misbehaves.
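For reference, here is roughly how I track FPS inside the loop (a minimal sketch, not the demo's exact code):
// minimal in-loop FPS tracking sketch (hypothetical, simplified)
let last_frame_time = performance.now()
function track_fps() {
    const now = performance.now()
    const fps = 1000 / (now - last_frame_time) // instantaneous frames per second
    last_frame_time = now
    console.log(fps.toFixed(1)) // consistently reports ~75 on my machine
    requestAnimationFrame(track_fps)
}
requestAnimationFrame(track_fps)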
TL;DR
Page Muted: Canvas draws fine.
Page Unmuted: Canvas visually lags, but no performance monitors report this.
Video Proof - Top right is Chrome.
Demo website: https://chrome-bug-muted-window.djkato.net/
Demo source code:
/**
 * Initialise the Web Audio API
 */
let audio_context
const audio_element = document.querySelector("#audio")
audio_element.src = "Jamie xx - Sleep Sound.mp3"

// audio nodes
let track
let audio_context_analyzer
let prev_data = new Array()

function main() {
    audio_context = new AudioContext()
    track = audio_context.createMediaElementSource(audio_element)
    audio_context_analyzer = audio_context.createAnalyser() // createAnalyser() takes no arguments
    audio_context_analyzer.fftSize = 1024
    audio_context_analyzer.smoothingTimeConstant = .2
    track.connect(audio_context_analyzer).connect(audio_context.destination)
    audio_element.play()
    animate()
}

function animate() {
    const svg_canvas = document.querySelector("#svgCanvas")

    // baseline shape: 200 points along the vertical middle of the viewBox
    let initial_shape = new Array()
    for (let i = 0; i < 200; i++) {
        initial_shape.push({
            x: (svg_canvas.viewBox.baseVal.width / 200) * i,
            y: svg_canvas.viewBox.baseVal.height / 2
        })
    }

    // read the current frequency data
    let fft_data_array = new Float32Array(200)
    audio_context_analyzer.getFloatFrequencyData(fft_data_array)

    /**
     * mutate the default shape by the audio data
     */
    let mutated_shape = new Array()
    for (let i = 0; i < fft_data_array.length; i++) {
        mutated_shape.push({
            x: initial_shape[i].x,
            y: initial_shape[i].y - Math.min(initial_shape[i].y, Math.max(fft_data_array[i] * 2 + 200, 0))
        })
    }

    /**
     * build the path and swap it into the SVG element
     */
    let path = `M ${0} ${svg_canvas.viewBox.baseVal.height} `
    for (let i = 0; i < mutated_shape.length; i++) {
        path += `L ${mutated_shape[i].x},${mutated_shape[i].y} `
    }
    path += `L ${svg_canvas.viewBox.baseVal.width} ${svg_canvas.viewBox.baseVal.height / 2} ` // width, not height
    path += `L ${svg_canvas.viewBox.baseVal.width} ${svg_canvas.viewBox.baseVal.height} `
    path += `Z `
    path = `<path d="${path}" stroke="none" fill="#c084fc"/>`
    svg_canvas.innerHTML = path

    requestAnimationFrame(animate)
}
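As an aside, the same drawing step can be sketched with a persistent <path> element updated via setAttribute, instead of rewriting innerHTML every frame (a minimal variation; whether it affects the symptom is untested):
// created once, e.g. at the top of main() — sketch of the setAttribute variant
const path_element = document.createElementNS("http://www.w3.org/2000/svg", "path")
path_element.setAttribute("stroke", "none")
path_element.setAttribute("fill", "#c084fc")
document.querySelector("#svgCanvas").appendChild(path_element)

// inside animate(), replacing the innerHTML assignment:
path_element.setAttribute("d", path) // `path` being just the "M … Z" data string here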
Related
We have a ticker at the bottom of an HTML page that shows news, just like a news ticker on TV. We currently use requestAnimationFrame for this, but we find that it does not always animate smoothly, and I wonder if anyone knows what the best practice for this would be.
Firstly, we tried running it on a Raspberry Pi, and it couldn't render it well no matter what we tried.
Secondly, we are running it on a Windows PC in a Chrome browser, and generally it works well, but if we play a high-resolution video on the same page, the rendering of the ticker becomes really laggy.
I understand that it's competing for resources, but animating a ticker should in principle not require many resources, so I think there should be a solution for this.
Here is our current code. It's TypeScript rather than plain JavaScript, but my question is not so much about troubleshooting details in the current code as about finding the best approach, which I assume is something different from what we are doing now:
private startTicker() {
    const ticker = document.getElementById(`RssTickerContainer_${this.props.rssTicker?.rssTicker?.url}`);
    if (!ticker || !this.state.items || this.state.items.length === 0) {
        setTimeout(() => {
            if (this.props.rssTicker?.rssTicker?.url) {
                // try to fetch rss content again after 1 second
                this.getRssFeed(this.props.rssTicker?.rssTicker?.url, true);
            }
        }, 1000);
        return;
    }
    console.log("Starting RSS ticker");
    const startPosition = this.state.leftPosition;
    let leftPosition = startPosition;
    let lastFrameCalled = performance.now();
    let fps = 0;
    this.animationFrameID = requestAnimationFrame(() => this.animateTicker(lastFrameCalled, fps, startPosition, leftPosition, ticker));
}

private animateTicker(lastFrameCalled: number, fps: number, startPosition: number, leftPosition: number, ticker: HTMLElement) {
    // calculate fps
    const timeSinceLastFrame = (performance.now() - lastFrameCalled) / 1000;
    lastFrameCalled = performance.now();
    fps = 1 / timeSinceLastFrame;
    const containerWidth = ticker.getBoundingClientRect().width;
    if (containerWidth <= 0) {
        // no content, component probably has unmounted
        return;
    }
    if ((leftPosition * -1) > containerWidth) {
        // when leftPosition (inverted, since it has a negative value) is greater than containerWidth,
        // all elements have passed the screen, so we restart the ticker
        console.log("Restarting RSS ticker");
        leftPosition = startPosition;
        ticker.style.left = leftPosition + "px";
        this.getRssFeed(this.props.rssTicker?.rssTicker?.url);
    } else {
        const timeToCross = 25;
        // use fps to set the new position, to avoid slow animations on weaker devices
        const step = window.innerWidth / timeToCross / fps;
        leftPosition = leftPosition - step;
        ticker.style.left = leftPosition + "px";
    }
    this.animationFrameID = requestAnimationFrame(() => this.animateTicker(lastFrameCalled, fps, startPosition, leftPosition, ticker));
}
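For comparison, here is the same stepping logic sketched with transform: translateX() instead of style.left (an assumption on my part: animating transform generally avoids per-frame layout work; the time-delta step below is mathematically equivalent to the fps division above):
// sketch only — same math as above, expressed with a time delta and translateX
private animateTickerWithTransform(lastFrameCalled: number, leftPosition: number, ticker: HTMLElement) {
    const now = performance.now();
    const secondsSinceLastFrame = (now - lastFrameCalled) / 1000;
    const timeToCross = 25; // seconds to cross the viewport, as above
    leftPosition -= (window.innerWidth / timeToCross) * secondsSinceLastFrame;
    ticker.style.transform = `translateX(${leftPosition}px)`;
    this.animationFrameID = requestAnimationFrame(() => this.animateTickerWithTransform(now, leftPosition, ticker));
}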
How can I create one of these sound effects with Tone.js notes?
Is this even possible? Given these notes:
"C","C#","Db","D","D#","Eb","E","E#","Fb","F","F#","Gb","G","G#","Ab","A","A#","Bb","B","B#","Cb"...
can I somehow use Tone.js to create a sound effect like "Tada!"? I think it needs more than just the notes/tones; it also needs some kind of pitch and time manipulation?
A simple C tone played for 400 ms:
polySynth.triggerAttack("C4"); // Tone.js note names generally need an octave, e.g. "C4"
setTimeout(() => polySynth.triggerRelease("C4"), 400);
Here is a working JSFiddle with Tone.js to experiment with.
I don't have a very experienced ear, but most of these sound like major chords (root, third, fifth) to me, some with an added octave. For example, C4, E4, G4, C5:
const chord = ["C4", "E4", "G4", "C5"];
const duration = 0.5;
const delay = 0.05;
const now = Tone.now();
for (let i = 0; i < chord.length; i++) {
    const note = chord[i];
    polySynth.triggerAttackRelease(note, duration, now + i * delay);
}
If you want to randomize the root note, it might be useful to work with frequencies directly instead of note names. The A above middle C is usually taken as 440 Hz, and each successive semitone above that is a factor of Math.pow(2, 1/12) higher:
const rootFrequency = 440;
const chordSemitones = [0, 4, 7, 12];
const duration = 0.5;
const delay = 0.1;
const now = Tone.now();
for (let i = 0; i < chordSemitones.length; i++) {
    const pitch = rootFrequency * Math.pow(2, chordSemitones[i] / 12);
    polySynth.triggerAttackRelease(pitch, duration, now + i * delay);
}
I'm using the Web Audio API to create a simple spectrum analyzer with the computer microphone as the input signal. The basic functionality of my current implementation works fine, using the default sampling rate (usually 48 kHz, but it could be 44.1 kHz depending on the browser).
For some applications, I would like to use a lower sampling rate (~8 kHz) for the FFT.
It looks like the Web Audio API is adding support for customizing the sample rate, currently only available in Firefox (https://developer.mozilla.org/en-US/docs/Web/API/AudioContextOptions/sampleRate).
Adding sample rate to the context constructor:
// create AudioContext object named 'audioCtx'
var audioCtx = new (AudioContext || webkitAudioContext)({ sampleRate: 8000 });
console.log(audioCtx.sampleRate);
The console outputs '8000' (in Firefox), so it appears to be working up to this point.
The microphone is turned on by the user via a pull-down menu. This is the function servicing that pull-down:
var microphone;

function getMicInputState() {
    let selectedValue = document.getElementById("micOffOn").value;
    if (selectedValue === "on") {
        navigator.mediaDevices.getUserMedia({ audio: true })
            .then(stream => {
                microphone = audioCtx.createMediaStreamSource(stream);
                microphone.connect(analyserNode);
            })
            .catch(err => { alert("Microphone is required."); });
    } else {
        microphone.disconnect();
    }
}
In Firefox, using the pull-down to activate the microphone displays a popup requesting access to the microphone (as normally expected). After clicking to allow the microphone, the console displays:
"Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported".
The display of the spectrum analyzer remains blank.
Any ideas how to overcome this error? If we can get past it, any guidance on how to specify sampleRate when the user's sound card sampling rate is unknown?
One approach to overcome this is to pass the audio packets captured from the microphone to the analyser node via a script processor node that re-samples the audio packets passing through it.
Brief overview of the script processor node:
Every script processor node has an input buffer and an output buffer.
When audio enters the input buffer, the script processor node fires the onaudioprocess event.
Whatever is placed in the output buffer of the script processor node becomes its output.
For detailed specs, refer to: Script processor node
Here is the pseudo-code:
1) Create a live media source, a script processor node, and an analyser node
2) Connect the live media source to the analyser node via the script processor node
3) Whenever an audio packet enters the script processor node, the onaudioprocess event is fired
4) When the onaudioprocess event is fired:
4.1) Extract audio data from the input buffer
4.2) Re-sample the audio data
4.3) Place the re-sampled data in the output buffer
The following code snippet implements the above pseudocode:
var microphone;

// *** 1) create a script processor node
var scriptProcessorNode = audioCtx.createScriptProcessor(4096, 1, 1);

function getMicInputState() {
    let selectedValue = document.getElementById("micOffOn").value;
    if (selectedValue === "on") {
        navigator.mediaDevices.getUserMedia({ audio: true })
            .then(stream => {
                microphone = audioCtx.createMediaStreamSource(stream);
                // *** 2) connect the live media source to analyserNode via the script processor node
                microphone.connect(scriptProcessorNode);
                scriptProcessorNode.connect(analyserNode);
            })
            .catch(err => { alert("Microphone is required."); });
    } else {
        microphone.disconnect();
    }
}

// *** 3) Whenever an audio packet passes through the script processor node, resample it
scriptProcessorNode.onaudioprocess = function (event) {
    var inputBuffer = event.inputBuffer;
    var outputBuffer = event.outputBuffer;
    for (var channel = 0; channel < outputBuffer.numberOfChannels; channel++) {
        var inputData = inputBuffer.getChannelData(channel);
        var outputData = outputBuffer.getChannelData(channel);
        // *** 3.1) Resample inputData
        var fromSampleRate = audioCtx.sampleRate;
        var toSampleRate = 8000;
        var resampledAudio = downsample(inputData, fromSampleRate, toSampleRate);
        // *** 3.2) copy the resampled audio into the output buffer
        // (only the first resampledAudio.length samples are valid; the rest of the block stays silent)
        for (var sample = 0; sample < resampledAudio.length && sample < outputData.length; sample++) {
            outputData[sample] = resampledAudio[sample];
        }
    }
};
function downsample(buffer, fromSampleRate, toSampleRate) {
    // buffer is a Float32Array
    var sampleRateRatio = Math.round(fromSampleRate / toSampleRate);
    var newLength = Math.round(buffer.length / sampleRateRatio);
    var result = new Float32Array(newLength);
    var offsetResult = 0;
    var offsetBuffer = 0;
    while (offsetResult < result.length) {
        var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);
        // average all input samples covered by this output sample
        var accum = 0, count = 0;
        for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
            accum += buffer[i];
            count++;
        }
        result[offsetResult] = accum / count;
        offsetResult++;
        offsetBuffer = nextOffsetBuffer;
    }
    return result;
}
Update - 03 Nov, 2020
The ScriptProcessorNode is deprecated and is being replaced by AudioWorklets; the approach to changing the sample rate remains the same.
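For reference, a minimal sketch of the same resampling idea ported to an AudioWorklet (the file and processor names are hypothetical, and a production version would need to buffer samples across the 128-frame render quanta):
// downsample-processor.js (hypothetical file name) — runs in the AudioWorkletGlobalScope
class DownsampleProcessor extends AudioWorkletProcessor {
    process(inputs, outputs) {
        const input = inputs[0];
        const output = outputs[0];
        if (input.length === 0) return true; // nothing connected yet
        const ratio = Math.round(sampleRate / 8000); // `sampleRate` is a worklet-scope global
        for (let ch = 0; ch < output.length; ch++) {
            const inData = input[ch] || input[0];
            const outData = output[ch];
            // same naive averaging as downsample() above, applied per render quantum
            let write = 0;
            for (let read = 0; read + ratio <= inData.length; read += ratio) {
                let acc = 0;
                for (let i = 0; i < ratio; i++) acc += inData[read + i];
                outData[write++] = acc / ratio;
            }
            // the rest of the 128-frame output block stays silent
        }
        return true;
    }
}
registerProcessor("downsample-processor", DownsampleProcessor);

// main thread: load the module, then insert the node between microphone and analyser
await audioCtx.audioWorklet.addModule("downsample-processor.js");
const downsampler = new AudioWorkletNode(audioCtx, "downsample-processor");
microphone.connect(downsampler);
downsampler.connect(analyserNode);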
Downsampling from the constructor and connecting an AnalyserNode is now possible in Chrome and Safari.
So the following code, taken from the corresponding MDN documentation, would work:
const audioContext = new (window.AudioContext || window.webkitAudioContext)({
    sampleRate: 8000
});

const mediaStream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: false
});
const mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);

const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyser.getByteFrequencyData(dataArray);
mediaStreamSource.connect(analyser);

const title = document.createElement("div");
title.innerText = `Sampling frequency 8kHz:`;
const wrapper = document.createElement("div");
const canvas = document.createElement("canvas");
wrapper.appendChild(canvas);
document.body.appendChild(title);
document.body.appendChild(wrapper);
const canvasCtx = canvas.getContext("2d");

function draw() {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(dataArray);
    canvasCtx.fillStyle = "rgb(0, 0, 0)";
    canvasCtx.fillRect(0, 0, canvas.width, canvas.height);
    var barWidth = canvas.width / bufferLength;
    var barHeight = 0;
    var x = 0;
    for (var i = 0; i < bufferLength; i++) {
        barHeight = dataArray[i] / 2;
        canvasCtx.fillStyle = "rgb(" + (2 * barHeight + 100) + ",50,50)";
        canvasCtx.fillRect(x, canvas.height - barHeight / 2, barWidth, barHeight);
        x += barWidth + 1;
    }
}
draw();
See here for a demo where both 48kHz and 8kHz sampled signal frequencies are displayed: https://codesandbox.io/s/vibrant-moser-cfex33
I have some sample data from vibration-analysis sensors installed on electrical motors. The sampling is done once or, at most, 3 times per day. The values can be expressed in g, gE, or mm/s.
I'm developing a personal algorithm in JavaScript to process some samples and perform a DFT. It's a simple brute-force implementation. I compared the results (real and imaginary parts) from JavaScript against MATLAB's, and they matched perfectly.
However, my sampling rate is very slow. Because of this, I have several questions whose answers I couldn't find in my searches:
Is it possible to apply a DFT to data sampled as slowly as this?
How can I determine the correct frequency scale for the X axis? It's complicated for me because I don't have an explicit Fs (sampling rate) value.
In my case, would it be worthwhile to apply a window function like the Hann window (suitable for vibration analysis)?
JavaScript code:
// signal is a one-dimensional array of real data (vibration values)
const fft = (signal) => { // brute-force DFT, despite the name
    let inputLength = signal.length;
    let Xre = new Array(inputLength); // DFT real part
    let Xim = new Array(inputLength); // DFT imaginary part
    let P = new Array(inputLength);   // power spectrum
    let M = new Array(inputLength);   // magnitude spectrum
    let angle = 2 * Math.PI / inputLength;
    // Hann window
    signal = signal.map((x, index) => {
        return x * 0.5 * (1 - Math.cos((2 * Math.PI * index) / (inputLength - 1)));
    });
    for (let k = 0; k < inputLength; ++k) {     // for each output element
        Xre[k] = 0; Xim[k] = 0;
        for (let n = 0; n < inputLength; ++n) { // for each input element
            Xre[k] += signal[n] * Math.cos(angle * k * n);
            Xim[k] -= signal[n] * Math.sin(angle * k * n);
        }
        P[k] = Math.pow(Xre[k], 2) + Math.pow(Xim[k], 2);
        M[k] = Math.sqrt(P[k]);
    }
    // only the first N/2 + 1 bins are unique for real-valued input
    return { Xre: Xre, Xim: Xim, P: P, M: M.slice(0, Math.round((inputLength / 2) + 1)) };
}
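As a quick sanity check (with hypothetical data), the function can be fed a sine wave of known frequency:
// hypothetical usage: a 1 cycle/day sine, sampled 3 times/day for 30 days (N = 90)
const samples = Array.from({ length: 90 }, (_, n) => Math.sin(2 * Math.PI * n / 3));
const { M } = fft(samples);
// M should peak at bin k = 30, since k * Fs / N = 30 * 3 / 90 = 1 cycle/day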
The first figure shows the chart results (time domain on the left side and frequency domain on the right side).
The second figure shows a small portion of my data samples.
Note: I'm sorry for the writing; I'm still a beginner English student.
The frequency doesn't matter. A sampling frequency as low as 1/day is just as fine as any other; just keep the Nyquist-Shannon theorem in mind, since you can only resolve frequencies up to half the sampling rate.
The irregular rate, however, is problematic. You need a fixed sampling frequency for a DFT. You could interpolate onto a uniform grid as preprocessing, but it would be better to do the sampling at fixed times.
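To make the frequency scale concrete: for N uniformly spaced samples at Fs samples per day, DFT bin k corresponds to k * Fs / N cycles per day, up to the Nyquist limit of Fs / 2. A small sketch (the Fs and N values are assumptions for illustration):
// hypothetical: frequency axis for N uniform samples at Fs = 3 samples/day
const Fs = 3;  // assumed sampling rate, in samples per day
const N = 90;  // e.g. 30 days of data
const freqs = [];
for (let k = 0; k <= Math.round(N / 2); k++) {
    freqs.push((k * Fs) / N); // cycles per day; the top bin is Fs / 2 = 1.5
}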
I am building a web app that allows users to listen to a loop of instrumental music and then record vocals on top. This all works using Recorder.js, but there are a few problems:
There is latency with recording, so this needs to be set by the user before pressing record.
The exported loop is not always the same length, as the sample rate might not match the time needed exactly.
However, since then I went back to the drawing board and asked: what's best for the user? This gave me a new set of requirements:
The backing loop plays continuously in the background
Recording starts and stops whenever the user chooses
The recording then plays back in sync with the loop (the dead time between loops is automatically filled with blank audio)
The user can slide an offset slider to adjust for small timing issues caused by latency
The user can select which portion of the recording to save (the same length as the original backing loop)
Here's a diagram of how that would look:
Logic I have so far:
// backing loop
a.startTime = 5
a.duration = 10
a.loop = true
// recording
b.startTime = 22.5
b.duration = 15
b.loop = false
// fill blank space + loop
fill = a.duration - (b.duration % a.duration) // 5
c = b.buffers + (fill * blankBuffers)
c.startTime = (context.currentTime - a.startTime) % a.duration
c.duration = 20
c.loop = true
// user corrects timing offset
c.startTime = ((context.currentTime - a.startTime) % a.duration) - offset
// user choose favourite loop
? this is where I start to lose the plot!
Here is an example of chopping the buffers sent from Recorder.js:
// shorten a recorded buffer (a Float32Array) to the 2s-3s range
var start = context.sampleRate * 2; // start at 2 seconds
var end = context.sampleRate * 3; // end at 3 seconds
buffers.push(buffer.subarray(start, end)); // subarray is a Float32Array method, not an Array one
And here is more example code from the previous versions I've been working on:
https://github.com/mattdiamond/Recorderjs/issues/105
Any help in working out how to slice the buffers for the exported loop or improving this logic would be greatly appreciated!
UPDATE
Using this example I was able to find out how to insert blank space into the recording:
http://mdn.github.io/audio-buffer/
I've now managed to almost replicate the functionality I need; however, the white noise seems off. Is there a miscalculation somewhere?
http://kmturley.github.io/Recorderjs/loop.html
I managed to solve this by writing the following logic:
diff = track2.startTime - track1.startTime
before = Math.round((diff % track1.duration) * 44100)
after = Math.round((track1.duration - ((diff + track2.duration) % track1.duration)) * 44100)
newAudio = [before data] + [recording data] + [after data]
and in javascript code it looks like this:
var i = 0,
channel = 0,
channelTotal = 2,
num = 0,
vocalsRecording = this.createBuffer(vocalsBuffers, channelTotal),
diff = this.recorder.startTime - backingInstance.startTime + (offset / 1000),
before = Math.round((diff % backingInstance.buffer.duration) * this.context.sampleRate),
after = Math.round((backingInstance.buffer.duration - ((diff + vocalsRecording.duration) % backingInstance.buffer.duration)) * this.context.sampleRate),
audioBuffer = this.context.createBuffer(channelTotal, before + vocalsBuffers[0].length + after, this.context.sampleRate),
buffer = null;
// loop through the audio left, right channels
for (channel = 0; channel < channelTotal; channel += 1) {
buffer = audioBuffer.getChannelData(channel);
// fill the empty space before the recording
for (i = 0; i < before; i += 1) {
buffer[num] = 0;
num += 1;
}
// add the recording data
for (i = 0; i < vocalsBuffers[channel].length; i += 1) {
buffer[num] = vocalsBuffers[channel][i];
num += 1;
}
// fill the empty space at the end of the recording
for (i = 0; i < after; i += 1) {
buffer[num] = 0;
num += 1;
}
}
// now return the new audio which should be the exact same length
return audioBuffer;
You can view a full working example here:
http://kmturley.github.io/Recorderjs/loop.html