Swift 3 Getting audio stream - javascript

I am very new to Swift programming, so please pardon me.
I am trying to do a voice recording, and below is the method that gets the stream from the iPhone's microphone. I am able to store the stream to a WAV file on the device using this method, and I confirmed that the file is OK and plays fine after the recording:
osErr = ExtAudioFileWrite(destinationFile!, frameCount, inputDataList)
What I am unsuccessful at is capturing the audio stream, passing it to the WKWebView as base64-encoded data, converting the data into a Blob, and then passing the Blob as a source to an HTML5 Audio tag using
URL.createObjectURL(new Blob( [blob], { type:{"audio/wav"} })
The Audio element just throws an error and does not play the blob. I don't know if what I am getting at the sendAudioBuffer part of my code is the correct data to pass.
I am really very thankful for any bit of information. The original code where I got the method is here:
https://gist.github.com/hotpaw2/ba815fc23b5d642705f2b1dedfaf0107
func processMicrophoneBuffer( inputDataList: UnsafeMutablePointer<AudioBufferList>, frameCount: UInt32 ) {
    let inputDataPtr = UnsafeMutableAudioBufferListPointer(inputDataList)
    let mBuffers: AudioBuffer = inputDataPtr[0]
    let count = Int(frameCount)
    let bufferPointer = UnsafeMutableRawPointer(mBuffers.mData)
    if let bptr = bufferPointer {
        let dataArray = bptr.assumingMemoryBound(to: Float.self)
        var sum: Float = 0.0
        var j = self.circInIdx
        let m = self.circBuffSize
        for i in 0..<(count/2) {
            let x = Float(dataArray[i+i  ])   // copy left channel sample
            let y = Float(dataArray[i+i+1])   // copy right channel sample
            self.circBuffer[j    ] = x
            self.circBuffer[j + 1] = y
            j += 2 ; if j >= m { j = 0 }      // into circular buffer
            sum += x * x + y * y
        }
        self.circInIdx = j                    // circular index will always be less than size
        if sum > 0.0 && count > 0 {
            let tmp = 5.0 * (logf(sum / Float(count)) + 20.0)
            let r: Float = 0.2
            audioLevel = r * tmp + (1.0 - r) * audioLevel
        }
    }
    //This will write to a file
    osErr = ExtAudioFileWrite(destinationFile!, frameCount, inputDataList)
    //This will get the audio stream and send to WKWebview to be converted to Blob
    sendAudioBuffer(buffer: Data(bytes: bufferPointer!, count: count))
}

... converting the data into a Blob then pass the Blob as a source in HTML5 Audio tag using
URL.createObjectURL(new Blob( [blob], { type:{"audio/wav"} })
You have the wrong syntax in the line above. The correct one is:
URL.createObjectURL(new Blob([blob], {type:"audio/wav"}))
Also, if your blob is really a base64 string, you can skip createObjectURL and set the src attribute of the <source> in your HTML5 Audio tag to a data URI, e.g.:
document.querySelector('#my_audio source').src = 'data:audio/wav;base64,' + blob;
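If your data really arrives as a base64 string and you want a Blob URL instead, a minimal decoding sketch (my illustration, not your code; it assumes base64Wav is the string received from the Swift side and that it already contains a complete WAV file, header included):
const byteString = atob(base64Wav);              // decode base64 to a binary string
const bytes = new Uint8Array(byteString.length);
for (let i = 0; i < byteString.length; i++) bytes[i] = byteString.charCodeAt(i);
const blob = new Blob([bytes], { type: 'audio/wav' });
document.querySelector('#my_audio').src = URL.createObjectURL(blob);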
Hope it helps.

Related

Pica: cannot use getImageData on canvas, make sure fingerprinting protection isn't enabled

I am getting this error while using the npm pica library. I want to resize an image using this library. I have tried two ways:
I. by passing the image URL directly to pica.resize(imageurl, canvas), and
II. by passing the image buffer to pica.resize(imageBuffer, canvas).
But it shows the error Pica: cannot use getImageData on canvas, make sure fingerprinting protection isn't enabled. when I run the code.
// Convert Base64 to BufferArray
const buf = Buffer.from(img, 'base64');
let binary = '';
const chunk = 8 * 1024;
let i;
for (i = 0; i < buf.length / chunk; i += 1) {
  binary += String.fromCharCode.apply(null, [
    ...buf.slice(i * chunk, (i + 1) * chunk),
  ]);
}
binary += String.fromCharCode.apply(null, [...buf.slice(i * chunk)]);

// Dimensions for the image
const width = 1200;
const height = 627;

// Instantiate the canvas object
const can = canvas.createCanvas(width, height);
// const context = canvas.getContext("2d");

console.log(binary)
pica.resize(binary, can)
  .then(result => console.log(result));

JS OffscreenCanvas.toDataURL

HTMLCanvasElement has toDataURL(), but OffscreenCanvas does not. What a surprise.
OK, so how can I get toDataURL() to work with Workers? I have a ready canvas (fully drawn) and can send it to a Worker. But what can I do from there?
The only solution I have is to manually do all the operations needed to create an image/png. I found this page from 2010 (I am not sure it is what I need, though), which provides the code below for generating a PNG and encoding it to base64.
And my final question:
1 - Is there some reasonable way to get toDataURL() from a Worker, OR
2 - Is there any library or something designed for this job, OR
3 - Using all the functionality of HTMLCanvasElement and OffscreenCanvas, how should the following code be adapted to replace toDataURL()?
Here are the two functions from the code I'm linking to. (They are really complicated for me, and I understand almost nothing of getDump().)
// output a PNG string
this.getDump = function() {
  // compute adler32 of output pixels + row filter bytes
  var BASE = 65521; /* largest prime smaller than 65536 */
  var NMAX = 5552;  /* NMAX is the largest n such that 255n(n+1)/2 + (n+1)(BASE-1) <= 2^32-1 */
  var s1 = 1;
  var s2 = 0;
  var n = NMAX;
  for (var y = 0; y < this.height; y++) {
    for (var x = -1; x < this.width; x++) {
      s1 += this.buffer[this.index(x, y)].charCodeAt(0);
      s2 += s1;
      if ((n -= 1) == 0) {
        s1 %= BASE;
        s2 %= BASE;
        n = NMAX;
      }
    }
  }
  s1 %= BASE;
  s2 %= BASE;
  write(this.buffer, this.idat_offs + this.idat_size - 8, byte4((s2 << 16) | s1));

  // compute crc32 of the PNG chunks
  function crc32(png, offs, size) {
    var crc = -1;
    for (var i = 4; i < size - 4; i += 1) {
      crc = _crc32[(crc ^ png[offs + i].charCodeAt(0)) & 0xff] ^ ((crc >> 8) & 0x00ffffff);
    }
    write(png, offs + size - 4, byte4(crc ^ -1));
  }
  crc32(this.buffer, this.ihdr_offs, this.ihdr_size);
  crc32(this.buffer, this.plte_offs, this.plte_size);
  crc32(this.buffer, this.trns_offs, this.trns_size);
  crc32(this.buffer, this.idat_offs, this.idat_size);
  crc32(this.buffer, this.iend_offs, this.iend_size);

  // convert PNG to string
  return "\211PNG\r\n\032\n" + this.buffer.join('');
}
Here it is quite clear what is going on:
// output a PNG string, Base64 encoded
this.getBase64 = function() {
  var s = this.getDump();
  // If the current browser supports the Base64 encoding
  // function, then offload that to the browser as it
  // will be done in native code.
  if ((typeof window.btoa !== 'undefined') && (window.btoa !== null)) {
    return window.btoa(s);
  }
  var ch = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
  var c1, c2, c3, e1, e2, e3, e4;
  var l = s.length;
  var i = 0;
  var r = "";
  do {
    c1 = s.charCodeAt(i);
    e1 = c1 >> 2;
    c2 = s.charCodeAt(i + 1);
    e2 = ((c1 & 3) << 4) | (c2 >> 4);
    c3 = s.charCodeAt(i + 2);
    if (l < i + 2) { e3 = 64; } else { e3 = ((c2 & 0xf) << 2) | (c3 >> 6); }
    if (l < i + 3) { e4 = 64; } else { e4 = c3 & 0x3f; }
    r += ch.charAt(e1) + ch.charAt(e2) + ch.charAt(e3) + ch.charAt(e4);
  } while ((i += 3) < l);
  return r;
}
Thanks
First, I'll note you probably don't want a data URL of your image file; data URLs are a far less performant way to deal with files than their binary equivalent, Blob, and almost everything you can do with a data URL can, and generally should, be done with a Blob and a Blob URI instead.
Now that that's been said, you can still generate a data URL from an OffscreenCanvas.
This is a two-step process:
call its convertToBlob method
read the generated Blob as a data URL using a FileReader[Sync]
const worker = new Worker(getWorkerURL());
worker.onmessage = e => console.log(e.data);

function getWorkerURL() {
  return URL.createObjectURL(
    new Blob([worker_script.textContent])
  );
}
<script id="worker_script" type="ws">
  const canvas = new OffscreenCanvas(150,150);
  const ctx = canvas.getContext('webgl');
  canvas[
    canvas.convertToBlob
      ? 'convertToBlob' // specs
      : 'toBlob'        // current Firefox
  ]()
  .then(blob => {
    const dataURL = new FileReaderSync().readAsDataURL(blob);
    postMessage(dataURL);
  });
</script>
Since what you actually want is to render what this OffscreenCanvas produced, you'd be better off generating your OffscreenCanvas by transferring the control of a visible one.
This way you can send the ImageBitmap directly to the UI without any memory overhead.
const offcanvas = document.getElementById('canvas')
  .transferControlToOffscreen();
const worker = new Worker(getWorkerURL());
worker.postMessage({canvas: offcanvas}, [offcanvas]);

function getWorkerURL() {
  return URL.createObjectURL(
    new Blob([worker_script.textContent])
  );
}
<canvas id="canvas"></canvas>
<script id="worker_script" type="ws">
  onmessage = e => {
    const canvas = e.data.canvas;
    const gl = canvas.getContext('webgl');
    gl.viewport(0, 0,
      gl.drawingBufferWidth, gl.drawingBufferHeight);
    gl.enable(gl.SCISSOR_TEST);
    // make some slow noise (we're in a Worker)
    for(let y=0; y<gl.drawingBufferHeight; y++) {
      for(let x=0; x<gl.drawingBufferWidth; x++) {
        gl.scissor(x, y, 1, 1);
        gl.clearColor(Math.random(), Math.random(), Math.random(), 1.0);
        gl.clear(gl.COLOR_BUFFER_BIT);
      }
    }
    // draw to visible <canvas> in FF
    if(gl.commit) gl.commit();
  };
</script>
If you really absolutely need an <img>, then create a Blob URI from the generated Blob. But note that by doing so you keep one copy of the image in memory (which is still far better than the three copies induced by a data URL, but still, don't do this with animated content).
const worker = new Worker(getWorkerURL());
worker.onmessage = e => {
  document.getElementById('img').src = e.data;
}

function getWorkerURL() {
  return URL.createObjectURL(
    new Blob([worker_script.textContent])
  );
}

img.onerror = e => {
  document.body.textContent = '';
  const a = document.createElement('a');
  a.href = "https://jsfiddle.net/5yhg2c9L/";
  a.textContent = "Your browser doesn't like StackSnippet's null origined iframe, please try again from this jsfiddle";
  document.body.append(a);
};
<img id="img">
<script id="worker_script" type="ws">
  const canvas = new OffscreenCanvas(150,150);
  const gl = canvas.getContext('webgl');
  gl.viewport(0, 0,
    gl.drawingBufferWidth, gl.drawingBufferHeight);
  gl.enable(gl.SCISSOR_TEST);
  // make some slow noise (we're in a Worker)
  for(let y=0; y<gl.drawingBufferHeight; y++) {
    for(let x=0; x<gl.drawingBufferWidth; x++) {
      gl.scissor(x, y, 1, 1);
      gl.clearColor(Math.random(), Math.random(), Math.random(), 1.0);
      gl.clear(gl.COLOR_BUFFER_BIT);
    }
  }
  canvas[
    canvas.convertToBlob
      ? 'convertToBlob' // specs
      : 'toBlob'        // current Firefox
  ]()
  .then(blob => {
    const blobURL = URL.createObjectURL(blob);
    postMessage(blobURL);
  });
</script>
(Note that you could also transfer an ImageBitmap from the Worker to the main thread and then draw it on a visible canvas, but in this case, using a transferred context is even better.)
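For completeness, here is a minimal sketch of that ImageBitmap route (my illustration, not part of the answer above; it assumes an already drawn OffscreenCanvas named canvas in the Worker, plus a visible <canvas id="canvas"> and a worker variable on the main thread):
// Worker side: hand the current frame over as a transferable ImageBitmap (no copy).
const bitmap = canvas.transferToImageBitmap();
postMessage(bitmap, [bitmap]);

// Main thread: paint the received ImageBitmap onto the visible canvas.
worker.onmessage = (e) => {
  const visible = document.getElementById('canvas');
  visible.getContext('bitmaprenderer').transferFromImageBitmap(e.data);
};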
Blob is great, but toBlob and convertToBlob are actually much, much slower than toDataURL on Chrome; I really don't understand why.
On Chrome, with a 1920x1080 canvas, toDataURL took me 20ms, toBlob took 300-500ms, and convertToBlob took 800ms.
So if performance is an issue, I would rather use toDataURL in the main thread.
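If you want to check this yourself, a quick timing sketch (my illustration; it assumes canvas is an already drawn 1920x1080 canvas, and the numbers will vary by browser and machine):
console.time('toDataURL');
const dataURL = canvas.toDataURL('image/png');
console.timeEnd('toDataURL');

console.time('toBlob');
canvas.toBlob(() => console.timeEnd('toBlob'), 'image/png'); // async, so this includes the callback delay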

Reducing sample rate of a Web Audio spectrum analyzer using mic input

I'm using the Web Audio API to create a simple spectrum analyzer using the computer microphone as the input signal. The basic functionality of my current implementation works fine, using the default sampling rate (usually 48 kHz, but it could be 44.1 kHz depending on the browser).
For some applications, I would like to use a lower sampling rate (~8 kHz) for the FFT.
It looks like the Web Audio API is adding support for customizing the sample rate, currently only available in Firefox (https://developer.mozilla.org/en-US/docs/Web/API/AudioContextOptions/sampleRate).
Adding sample rate to the context constructor:
// create AudioContext object named 'audioCtx'
var audioCtx = new (AudioContext || webkitAudioContext)({sampleRate: 8000,});
console.log(audioCtx.sampleRate)
The console outputs '8000' (in Firefox), so it appears to be working up to this point.
The microphone is turned on by the user using a pull-down menu. This is the function servicing that pull-down:
var microphone;

function getMicInputState()
{
  let selectedValue = document.getElementById("micOffOn").value;
  if (selectedValue === "on") {
    navigator.mediaDevices.getUserMedia({audio: true})
      .then(stream => {
        microphone = audioCtx.createMediaStreamSource(stream);
        microphone.connect(analyserNode);
      })
      .catch(err => { alert("Microphone is required."); });
  } else {
    microphone.disconnect();
  }
}
In Firefox, using the pull-down to activate the microphone displays a popup requesting access to the microphone (as normally expected). After clicking to allow the microphone, the console displays:
"Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported".
The display of the spectrum analyzer remains blank.
Any ideas how to overcome this error? If we can get past this, any guidance on how to specify sampleRate when the user's soundcard sampling rate is unknown?
One approach to overcome this is to pass the audio packets captured from the microphone to the analyser node via a script processor node that re-samples the audio packets passing through it.
Brief overview of the script processor node:
Every script processor node has an input buffer and an output buffer.
When audio enters the input buffer, the script processor node fires the onaudioprocess event.
Whatever is placed in the output buffer of the script processor node becomes its output.
For detailed specs, refer to: Script processor node
Here is the pseudo-code:
1) Create live media source, script processor node and analyzer node
2) Connect live media source to analyzer node via script processor node
3) Whenever an audio packet enters the script processor node, the onaudioprocess event is fired
4) When the onaudioprocess event is fired:
   4.1) Extract audio data from input buffer
   4.2) Re-sample audio data
   4.3) Place re-sampled data in output buffer
The following code snippet implements the above pseudocode:
var microphone;

// *** 1) create a script processor node
var scriptProcessorNode = audioCtx.createScriptProcessor(4096, 1, 1);

function getMicInputState()
{
  let selectedValue = document.getElementById("micOffOn").value;
  if (selectedValue === "on") {
    navigator.mediaDevices.getUserMedia({audio: true})
      .then(stream => {
        microphone = audioCtx.createMediaStreamSource(stream);
        // *** 2) connect live media source to analyserNode via script processor node
        microphone.connect(scriptProcessorNode);
        scriptProcessorNode.connect(analyserNode);
      })
      .catch(err => { alert("Microphone is required."); });
  } else {
    microphone.disconnect();
  }
}

// *** 3) Whenever an audio packet passes through script processor node, resample it
scriptProcessorNode.onaudioprocess = function(event){
  var inputBuffer = event.inputBuffer;
  var outputBuffer = event.outputBuffer;
  for(var channel = 0; channel < outputBuffer.numberOfChannels; channel++){
    var inputData = inputBuffer.getChannelData(channel);
    var outputData = outputBuffer.getChannelData(channel);

    // *** 3.1) Resample inputData
    var fromSampleRate = audioCtx.sampleRate;
    var toSampleRate = 8000;
    var resampledAudio = downsample(inputData, fromSampleRate, toSampleRate);

    // *** 3.2) make output equal to the resampled audio
    for (var sample = 0; sample < outputData.length; sample++) {
      outputData[sample] = resampledAudio[sample];
    }
  }
}
function downsample(buffer, fromSampleRate, toSampleRate) {
  // buffer is a Float32Array
  var sampleRateRatio = Math.round(fromSampleRate / toSampleRate);
  var newLength = Math.round(buffer.length / sampleRateRatio);
  var result = new Float32Array(newLength);
  var offsetResult = 0;
  var offsetBuffer = 0;
  while (offsetResult < result.length) {
    var nextOffsetBuffer = Math.round((offsetResult + 1) * sampleRateRatio);
    var accum = 0, count = 0;
    for (var i = offsetBuffer; i < nextOffsetBuffer && i < buffer.length; i++) {
      accum += buffer[i];
      count++;
    }
    result[offsetResult] = accum / count;
    offsetResult++;
    offsetBuffer = nextOffsetBuffer;
  }
  return result;
}
Update - 03 Nov, 2020
Script Processor Node is being deprecated and replaced with AudioWorklets.
The approach to changing the sample rate remains the same.
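As a rough illustration only (my sketch, not part of the original answer), the same averaging downsampler could live in an AudioWorklet. The file name downsampler-worklet.js, the fixed 8 kHz target and the port-based hand-off are all assumptions here:
// downsampler-worklet.js (hypothetical file name), runs in the AudioWorkletGlobalScope.
// Averages groups of input samples down to roughly 8 kHz and posts the result.
class DownsamplerProcessor extends AudioWorkletProcessor {
  process(inputs) {
    const channel = inputs[0][0];                  // first channel of the first input
    if (channel) {
      const ratio = Math.round(sampleRate / 8000); // sampleRate is a worklet global
      const out = new Float32Array(Math.floor(channel.length / ratio));
      for (let i = 0; i < out.length; i++) {
        let sum = 0;
        for (let j = 0; j < ratio; j++) sum += channel[i * ratio + j];
        out[i] = sum / ratio;
      }
      this.port.postMessage(out);
    }
    return true; // keep the processor alive
  }
}
registerProcessor('downsampler', DownsamplerProcessor);

// Main thread (inside an async function): load the module and route the mic through it.
await audioCtx.audioWorklet.addModule('downsampler-worklet.js');
const downsamplerNode = new AudioWorkletNode(audioCtx, 'downsampler');
microphone.connect(downsamplerNode);
downsamplerNode.port.onmessage = (e) => {
  // e.data is a Float32Array of ~8 kHz samples for analysis
};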
Downsampling from the constructor and connecting an AnalyserNode is now possible in Chrome and Safari.
So the following code, taken from the corresponding MDN documentation, would work:
const audioContext = new (window.AudioContext || window.webkitAudioContext)({
  sampleRate: 8000
});

const mediaStream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: false
});

const mediaStreamSource = audioContext.createMediaStreamSource(mediaStream);

const analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
analyser.getByteFrequencyData(dataArray);

mediaStreamSource.connect(analyser);

const title = document.createElement("div");
title.innerText = `Sampling frequency 8kHz:`;
const wrapper = document.createElement("div");
const canvas = document.createElement("canvas");
wrapper.appendChild(canvas);
document.body.appendChild(title);
document.body.appendChild(wrapper);

const canvasCtx = canvas.getContext("2d");

function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(dataArray);

  canvasCtx.fillStyle = "rgb(0, 0, 0)";
  canvasCtx.fillRect(0, 0, canvas.width, canvas.height);

  var barWidth = canvas.width / bufferLength;
  var barHeight = 0;
  var x = 0;

  for (var i = 0; i < bufferLength; i++) {
    barHeight = dataArray[i] / 2;
    canvasCtx.fillStyle = "rgb(" + (2 * barHeight + 100) + ",50,50)";
    canvasCtx.fillRect(x, canvas.height - barHeight / 2, barWidth, barHeight);
    x += barWidth + 1;
  }
}

draw();
See here for a demo where both 48kHz and 8kHz sampled signal frequencies are displayed: https://codesandbox.io/s/vibrant-moser-cfex33

How to save binary buffer to png file in nodejs?

I have a binary nodejs Buffer object that contains bitmap information. How do I make an image from the buffer and save it to a file?
Edit:
I tried using the file system package as @herchu said, but if I do this:
let robot = require("robotjs")
let fs = require('fs')

let size = 200
let img = robot.screen.capture(0, 0, size, size)
let path = 'myfile.png'
let buffer = img.image

fs.open(path, 'w', function (err, fd) {
  if (err) {
    // Something wrong creating the file
  }
  fs.write(fd, buffer, 0, buffer.length, null, function (err) {
    // Something wrong writing contents!
  })
})
I get
Although the solutions by @herchu and @Jake work, they are extremely slow (10-15s in my experience).
Jimp supports converting a raw pixel Buffer into a PNG out-of-the-box and works a lot faster (sub-second).
const img = robot.screen.capture(0, 0, width, height).image;

new Jimp({data: img, width, height}, (err, image) => {
  image.write(fileName);
});
Note: I am editing my answer according to your last edits.
If you are using Robotjs, note that its Bitmap object contains a Buffer of raw pixel data -- not a PNG or any other file format, just pixels next to each other (exactly 200 x 200 elements in your case).
I have not found any function in the Robotjs library for writing contents in another format (not that I know the library well either), so in this answer I am using a different library, Jimp, for the image manipulation.
let robot = require("robotjs")
let fs = require('fs')
let Jimp = require('jimp')

let size = 200
let rimg = robot.screen.capture(0, 0, size, size)
let path = 'myfile.png'

// Create a new blank image, same size as Robotjs' one
let jimg = new Jimp(size, size);

for (var x=0; x<size; x++) {
  for (var y=0; y<size; y++) {
    // hex is a string, rrggbb format
    var hex = rimg.colorAt(x, y);
    // Jimp expects an Int, with RGBA data,
    // so add FF as 'full opaque' to RGB color
    var num = parseInt(hex+"ff", 16)
    // Set pixel manually
    jimg.setPixelColor(num, x, y);
  }
}

jimg.write(path)
Note that the conversion is done by manually iterating through all pixels; this is slow in JS. Also there are some details on how each library handles their pixel format, so some manipulation was needed in the loop -- it should be clear from the embedded comments.
Adding this as an addendum to the accepted answer from @herchu: this code sample processes/converts the raw bytes much more quickly (< 1s for me for a full screen). Hope this is helpful to someone.
var jimg = new Jimp(width, height);

for (var x=0; x<width; x++) {
  for (var y=0; y<height; y++) {
    // robotjs buffers are BGRA, so the variable names below are effectively swapped:
    // index holds blue and index+2 holds red; the formula still builds an RRGGBBAA int.
    var index = (y * rimg.byteWidth) + (x * rimg.bytesPerPixel);
    var r = rimg.image[index];
    var g = rimg.image[index+1];
    var b = rimg.image[index+2];
    var num = (r*256) + (g*256*256) + (b*256*256*256) + 255;
    jimg.setPixelColor(num, x, y);
  }
}
Four times faster!
About 280ms and 550Kb for a full 1920x1080 screen if you use this script.
I found this pattern (the red and blue bytes are swapped) when comparing the two byte streams byte by byte.
const robotjs = require('robotjs');
const Jimp = require('jimp');
const app = require('express').Router();

app.get('/screenCapture', (req, res)=>{
  let image = robotjs.screen.capture();
  for(let i=0; i < image.image.length; i++){
    if(i%4 == 0){
      [image.image[i], image.image[i+2]] = [image.image[i+2], image.image[i]];
    }
  }
  var jimg = new Jimp(image.width, image.height);
  jimg.bitmap.data = image.image;
  jimg.getBuffer(Jimp.MIME_PNG, (err, result)=>{
    res.set('Content-Type', Jimp.MIME_PNG);
    res.send(result);
  });
});
If you add this code before jimg.getBuffer you'll get about 210ms and 320Kb for a full screen:
jimg.rgba(true);
jimg.filterType(1);
jimg.deflateLevel(5);
jimg.deflateStrategy(1);
I suggest you take a look at sharp, as it has superior performance metrics over jimp.
The issue with robotjs screen capturing, which otherwise happens to be very efficient, is that it uses the BGRA color model and not RGBA, so you need to do an additional color rotation.
Also, as we take a screenshot of the desktop, I can't imagine a case where we would need transparency, so I suggest ignoring it.
const [left, top, width, height] = [0, 0, 100, 100]
const channels = 3
const {image, width: cWidth, height: cHeight, bytesPerPixel, byteWidth} = robot.screen.capture(left, top, width, height)
const uint8array = new Uint8Array(cWidth*cHeight*channels);

for(let h=0; h<cHeight; h+=1) {
  for(let w=0; w<cWidth; w+=1) {
    let offset = (h*cWidth + w)*channels
    let offset2 = byteWidth*h + w*bytesPerPixel
    uint8array[offset] = image.readUInt8(offset2 + 2)
    uint8array[offset + 1] = image.readUInt8(offset2 + 1)
    uint8array[offset + 2] = image.readUInt8(offset2 + 0)
  }
}

await sharp(Buffer.from(uint8array), {
  raw: {
    width: cWidth,
    height: cHeight,
    channels,
  }
}).toFile('capture.png')
I use an intermediate array here, but you could actually just do the swap in place on the result of the screen capture.
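A rough sketch of that in-place variant (my illustration, not the answer's code; it assumes the same robot/sharp setup as above and that byteWidth equals width * bytesPerPixel, i.e. no row padding):
const cap = robot.screen.capture(left, top, width, height)
// Swap R and B in place: robotjs gives BGRA, sharp expects RGBA.
for (let i = 0; i < cap.image.length; i += 4) {
  const b = cap.image[i]
  cap.image[i] = cap.image[i + 2]
  cap.image[i + 2] = b
}
await sharp(cap.image, {
  raw: { width: cap.width, height: cap.height, channels: 4 }
}).removeAlpha().toFile('capture.png')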

merging / layering multiple ArrayBuffers into one AudioBuffer using Web Audio API

I need to layer looping .wav tracks that ultimately I will need to be able to turn on and off and keep in sync.
First I load the tracks and stop BufferLoader from turning the loaded ArrayBuffer into an AudioBuffer (hence the false):
function loadTracks(data) {
  for (var i = 0; i < data.length; i++) {
    trackUrls.push(data[i]['url']);
  };
  bufferLoader = new BufferLoader(context, trackUrls, finishedLoading);
  bufferLoader.load(false);
  return loaderDefered.promise;
}
When you click a button on screen it calls startStop().
function startStop(index, name, isPlaying) {
  if(!activeBuffer) {
    activeBuffer = bufferList[index];
  }else{
    activeBuffer = appendBuffer(activeBuffer, bufferList[index]);
  }
  context.decodeAudioData(activeBuffer, function(buffer){
    audioBuffer = buffer;
    play();
  })

  function play() {
    var scheduledTime = 0.015;
    try {
      audioSource.stop(scheduledTime);
    } catch (e) {}

    audioSource = context.createBufferSource();
    audioSource.buffer = audioBuffer;
    audioSource.loop = true;
    audioSource.connect(context.destination);
    var currentTime = context.currentTime + 0.010 || 0;
    audioSource.start(scheduledTime - 0.005, currentTime, audioBuffer.duration - currentTime);
    audioSource.playbackRate.value = 1;
  }
}
Most of the code I found is from this guy's GitHub.
In the demo you can hear him layering AudioBuffers.
I have tried the same on my hosting.
Disregarding the AngularJS stuff, the Web Audio work happens in service.js at:
/js/angular/service.js
If you open the console and click the buttons, you can see activeBuffer.byteLength (type ArrayBuffer) incrementing; however, even after being decoded by the context.decodeAudioData method it still only plays the first sound you clicked instead of a merged AudioBuffer.
I'm not sure I totally understand your scenario - don't you want these to be playing simultaneously? (i.e. bass gets layered on top of the drums).
Your current code is trying to concatenate an additional audio file whenever you hit the button for that file. You can't just concatenate audio files (in their ENCODED form) and then run them through decode - the decodeAudioData method decodes the first complete sound in the ArrayBuffer and then stops (because it's done decoding that sound).
What you should do is change the logic to concatenate the buffer data from the resulting AudioBuffers (see below). Even this logic isn't QUITE what you should do - it is still caching the encoded audio files and decoding every time you hit the button. Instead, you should cache the decoded audio buffers and just concatenate those (see the sketch after the code below).
function startStop(index, name, isPlaying) {
  // Note we're decoding just the new sound
  context.decodeAudioData( bufferList[index], function(buffer){
    // We have a decoded buffer - now we need to concatenate it
    if(!audioBuffer) {
      audioBuffer = buffer;
    }else{
      audioBuffer = concatenateAudioBuffers(audioBuffer, buffer);
    }

    play();
  })
}

function concatenateAudioBuffers(buffer1, buffer2) {
  if (!buffer1 || !buffer2) {
    console.log("no buffers!");
    return null;
  }

  if (buffer1.numberOfChannels != buffer2.numberOfChannels) {
    console.log("number of channels is not the same!");
    return null;
  }

  if (buffer1.sampleRate != buffer2.sampleRate) {
    console.log("sample rates don't match!");
    return null;
  }

  var tmp = context.createBuffer(buffer1.numberOfChannels, buffer1.length + buffer2.length, buffer1.sampleRate);

  for (var i=0; i<tmp.numberOfChannels; i++) {
    var data = tmp.getChannelData(i);
    data.set(buffer1.getChannelData(i));
    data.set(buffer2.getChannelData(i), buffer1.length);
  }
  return tmp;
};
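A sketch of that caching idea (my illustration, not the answer's code; it assumes bufferList still holds the encoded ArrayBuffers and reuses the concatenateAudioBuffers and play helpers above):
var decodedBuffers = [];

// Decode every track exactly once, up front.
function decodeAllTracks() {
  return Promise.all(bufferList.map(function (encoded) {
    return new Promise(function (resolve, reject) {
      context.decodeAudioData(encoded, resolve, reject);
    });
  })).then(function (buffers) {
    decodedBuffers = buffers;
  });
}

// On each click, only concatenate already-decoded AudioBuffers.
function startStop(index) {
  audioBuffer = audioBuffer
    ? concatenateAudioBuffers(audioBuffer, decodedBuffers[index])
    : decodedBuffers[index];
  play();
}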
SOLVED:
To get multiple loops of the same duration playing at the same time and staying in sync even when you start and stop them randomly:
First, create all your buffer sources, where bufferList is an array of AudioBuffers and the first sound is the one you are going to read from and overwrite with your other sounds.
function createAllBufferSources() {
  for (var i = 0; i < bufferList.length; i++) {
    var source = context.createBufferSource();
    source.buffer = bufferList[i];
    source.loop = true;
    bufferSources.push(source);
  };
  console.log(bufferSources)
}
Then:
function start() {
  var rewrite = bufferSources[0];
  rewrite.connect(context.destination);
  var processNode = context.createScriptProcessor(2048, 2, 2);
  rewrite.connect(processNode)

  processNode.onaudioprocess = function(e) {
    // getting the left and right of the sound we want to overwrite
    var left = rewrite.buffer.getChannelData(0);
    var right = rewrite.buffer.getChannelData(1);
    var overL = [],
        overR = [],
        i, a, b, l;
    l = bufferList.length;

    // storing all the loops channel data
    for (i = 0; i < l; i++) {
      overL[i] = bufferList[i].getChannelData(0);
      overR[i] = bufferList[i].getChannelData(1);
    }

    // looping through the channel data of the sound we are going to overwrite
    a = 0, b = overL.length, l = left.length;
    for (i = 0; i < l; i++) {
      // making sure it's blank before we start to write
      left[i] -= left[i];
      right[i] -= right[i];

      // looping through all the sounds we want to add and assigning the bytes to the old sound, both at the same position
      for (a = 0; a < b; a++) {
        left[i] += overL[a][i];
        right[i] += overR[a][i];
      }
      left[i] /= b;
      right[i] /= b;
    }
  };

  processNode.connect(context.destination);
  rewrite.start(0)
}
If you remove an AudioBuffer from bufferList and add it again at any point, it will always be in sync.
EDIT:
Keep in mind that:
- the processor node gets garbage collected weirdly.
- this is very taxing; you might want to think about using Web Workers somehow.
