I'm trying to rewrite some (very simple) Android code I found written in Java into a static HTML5 app (I don't need a server to do anything, and I'd like to keep it that way). I have an extensive background in web development, but only a basic understanding of Java, and even less knowledge of Android development.
The only function of the app is to take some numbers and convert them into an audio chirp from bytes. I have absolutely no problem translating the mathematical logic into JS. Where I'm having trouble is actually producing the sound. These are the relevant parts of the original code:
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
// later in the code:
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize, AudioTrack.MODE_STATIC);
// some math, and then:
track.write(sound, 0, sound.length); // sound is an array of bytes
How do I do this in JS? I can use a dataURI to produce the sound from the bytes, but does that allow me to control the other information here (i.e., sample rate, etc.)? In other words: What's the simplest, most accurate way to do this in JS?
Update
I have been trying to replicate what I found in this answer. This is the relevant part of my code:
window.onload = init;
var context; // Audio context
var buf;     // Audio buffer

function init() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser does not support any AudioContext and cannot play back this audio.");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    context = new AudioContext();
}

function playByteArray(bytes) {
    var buffer = new Uint8Array(bytes.length);
    buffer.set(new Uint8Array(bytes), 0);
    context.decodeAudioData(buffer.buffer, play);
}

function play(audioBuffer) {
    var source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(context.destination);
    source.start(0);
}
However, when I run this I get this error:
Uncaught (in promise) DOMException: Unable to decode audio data
Which I find quite extraordinary, as it's such a general error that it tells me exactly squat about what is wrong. Even more surprising, when I debugged this step by step, the chain of errors starts (expectedly) at the line context.decodeAudioData(buffer.buffer, play); but then runs through a few more lines within the jQuery file (3.2.1, uncompressed), passing lines 5208, 5195, 5191, 5219, 5223 and lastly 5015 before erroring out. I have no clue why jQuery has anything to do with it, and the error gives me no idea what to try. Any ideas?
If bytes is an ArrayBuffer, it is not necessary to create a Uint8Array. You can pass the ArrayBuffer bytes directly to AudioContext.decodeAudioData(), which returns a Promise; chain .then() to .decodeAudioData() and call it with the play function as the parameter.
In the JavaScript at the stack snippet below, an <input type="file"> element is used to accept the upload of an audio file, and FileReader.prototype.readAsArrayBuffer() creates an ArrayBuffer from the File object, which is then passed to playByteArray.
window.onload = init;
var context; // Audio context
var buf;     // Audio buffer
var reader = new FileReader(); // to create `ArrayBuffer` from `File`

function init() {
    if (!window.AudioContext) {
        if (!window.webkitAudioContext) {
            alert("Your browser does not support any AudioContext and cannot play back this audio.");
            return;
        }
        window.AudioContext = window.webkitAudioContext;
    }
    context = new AudioContext();
}

function handleFile(file) {
    console.log(file);
    reader.onload = function() {
        console.log(reader.result instanceof ArrayBuffer);
        playByteArray(reader.result); // pass `ArrayBuffer` to `playByteArray`
    }
    reader.readAsArrayBuffer(file);
}

function playByteArray(bytes) {
    context.decodeAudioData(bytes)
        .then(play)
        .catch(function(err) {
            console.error(err);
        });
}

function play(audioBuffer) {
    var source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(context.destination);
    source.start(0);
}
<input type="file" accept="audio/*" onchange="handleFile(this.files[0])" />
I solved it myself. I read more into the MDN docs explaining AudioBuffer and realized two important things:
I didn't need to call decodeAudioData (since I'm creating the data myself, there's nothing to decode). I actually took that bit from the answer I was replicating and, in retrospect, it was entirely needless.
Since I'm working with 16-bit PCM stereo, that meant I needed to use a Float32Array (2 channels, each 16 bit).
Granted, I still had a problem with some of my calculations that resulted in a distorted sound, but as far as producing the sound itself, I ended up doing this really simple solution:
function playBytes(bytes) {
    var floats = new Float32Array(bytes.length);

    bytes.forEach(function(sample, i) {
        floats[i] = sample / 32767;
    });

    var buffer = context.createBuffer(1, floats.length, 48000),
        source = context.createBufferSource();
    buffer.getChannelData(0).set(floats);
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0);
}
I can probably optimize it a bit further - the division by 32767 should happen earlier, in the part where I'm calculating the data, for example. Also, I'm creating a Float32Array with two channels and then outputting only one of them, because I really don't need both. I couldn't figure out if there's a way to create a one-channel mono file with an Int16Array, or if that's even necessary/better.
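For what it's worth, one way to build a mono buffer straight from 16-bit samples (just a sketch, assuming the samples arrive as a signed Int16Array and reusing the context from above; playInt16Mono is an illustrative name):
function playInt16Mono(samples, sampleRate) {
    // Convert signed 16-bit integers to floats in [-1, 1].
    var floats = Float32Array.from(samples, function(s) { return s / 32768; });

    var buffer = context.createBuffer(1, floats.length, sampleRate);
    buffer.copyToChannel(floats, 0);

    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start(0);
}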
Anyway, that's essentially it. It's really just the most basic solution, with some minimal understanding on my part of how to handle my data correctly. Hope this helps anyone out there.
Related
I want to play the AudioBuffer that I got from AudioContext.decodeAudioData() with an AudioWorklet. I'm currently able to play the decoded audio buffer with an AudioBufferSourceNode, but as far as I know that method executes the task on the main thread, which is not what I want; what I want is to play the audio in the background, which seems to be possible only with workers. But workers can't access the Web Audio API, so the only way is an AudioWorklet.
Setting up the worklet:
var audioContext = new AudioContext();
await audioContext.audioWorklet.addModule("./playing-audio-processor.js");
PlayingAudioProcessor = new AudioWorkletNode(
    audioContext,
    "playing-audio-processor"
);
PlayingAudioProcessor.connect(audioContext.destination);
audioContext.resume();
Decoding and sending it to the worklet (I'm sure that the passed audioBuffer is fine and can easily be played with an AudioBufferSourceNode):
let ctx = new AudioContext();
ctx.decodeAudioData(new Uint8Array(audioData).buffer, (audioBuffer) => {
    // set `audioData` of the worklet to a Float32Array
    myAudioWorklet.port.postMessage(audioBuffer.getChannelData(0))
})
The length of the passed audio data array (audioBuffer.getChannelData(0)) is 960, which is greater than the length of outputs[0][0], so I split it up (actually that doesn't seem to be a good idea, and I think it's why I don't get the expected audio output):
class PlayingAudioProcessor extends AudioWorkletProcessor {
    audioData = []

    constructor() {
        super();
        // set listener to receive audio data
        this.port.onmessage = (data) => {
            this.audioData = data.data
        }
    }

    process(inputs, outputs, parameters) {
        // playing each 128 floats of the 960 floats
        for (let i = 0; i < this.audioData.length / 128; i++) {
            for (let b = 0; b < 128; b++) {
                if ((i * 128) + b <= this.audioData.length) {
                    outputs[0][0][b] = this.audioData[(i * 128) + b];
                }
            }
        }
        return true;
    }
}

registerProcessor("playing-audio-processor", PlayingAudioProcessor);
The problem now is that the audio result is nothing but meaningless noise that depends on the loudness of the input data.
I really need to solve this, so please post anything that might be helpful.
Thank you.
It looks like you're writing all the samples within a single process() call. You would instead need to write only 128 samples per process() call to achieve the desired result.
The first invocation would need to write samples 1 to 128 of your AudioBuffer, the second invocation samples 129 to 256, and so on...
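A minimal sketch of that idea (names are illustrative; the processor keeps a read offset into the samples it received from the main thread and writes only 128 of them per call):
// playing-audio-processor.js (sketch)
class PlayingAudioProcessor extends AudioWorkletProcessor {
    constructor() {
        super();
        this.audioData = new Float32Array(0);
        this.readIndex = 0; // how many samples have been played so far
        this.port.onmessage = (event) => {
            this.audioData = event.data;
            this.readIndex = 0;
        };
    }

    process(inputs, outputs, parameters) {
        var output = outputs[0][0]; // first output, first channel: 128 frames per call
        for (var i = 0; i < output.length; i++) {
            // Write the next buffered sample, or silence once we run out.
            output[i] = this.readIndex < this.audioData.length
                ? this.audioData[this.readIndex++]
                : 0;
        }
        return true; // stay alive and wait for the next message
    }
}

registerProcessor("playing-audio-processor", PlayingAudioProcessor);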
AudioBufferSourceNode doesn't play audio on the main thread. None of the AudioNode objects do that (except for ScriptProcessorNode, which is deprecated). All audio processing and playback for the Web Audio API is performed inside a separate web audio thread. Only the parts of the audio nodes that send control messages run on the main thread. By which I mean that a brief message gets sent between threads when you call a method like start() or setValueAtTime(), etc.
https://www.w3.org/TR/webaudio/#control-thread-and-rendering-thread
In the project I'm working on, we have about 30 audio tracks to which we apply filters and play the audio back. Originally this was done server-side and returned a base64 string for each track, which I then loaded with new Audio().
This worked well if you had fast internet speeds, but on slow speeds, it could take up to an hour for the tracks to be returned from the server, so now we're applying the filters client-side.
Applying the filters is no problem, but I'm trying not to rewrite my entire playback algorithm (it's much more involved than just pause, play, stop) and am wondering if I can encode an AudioContext to Base64.
I've tried creating a new Audio and passing the AudioContext, creating a new Audio and passing the AudioBuffer, and something based on this example. But none of it works, and I can't find any examples of what I'm trying to do on the internet.
If someone could take a look at my code and help me out, I'd greatly appreciate it. Thanks in advance!
var audioCtx = new AudioContext();
var source = audioCtx.createBufferSource();
var request = new XMLHttpRequest();

request.open("GET", "/path/to/audio", true);
request.responseType = "arraybuffer";

request.onload = function () {
    audioCtx.decodeAudioData(request.response, function (buffer) {
        source.buffer = buffer;
        // Apply filters to the audio
        // Here I would like to convert the audio to Base64
        callback(source);
    }, function (error) {
        console.error("decodeAudioData error", error);
    });
};

request.send();
It's a bit hard to know exactly what you want from the snippet you give, but based on it, you might be able to use an OfflineAudioContext if you know how long your audio files are. The offline context will return an AudioBuffer which you can then use to get a base64-encoded audio result.
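A rough sketch of that idea (untested; renderToBase64 and audioBufferToWav are illustrative names, the duration is assumed to be known, and the filter setup is left as a placeholder):
function renderToBase64(decodedBuffer, durationInSeconds) {
    var offlineCtx = new OfflineAudioContext(
        decodedBuffer.numberOfChannels,
        Math.ceil(durationInSeconds * decodedBuffer.sampleRate),
        decodedBuffer.sampleRate);

    var source = offlineCtx.createBufferSource();
    source.buffer = decodedBuffer;
    // ...create and connect your filter nodes here, between source and destination
    source.connect(offlineCtx.destination);
    source.start(0);

    return offlineCtx.startRendering().then(function (rendered) {
        var bytes = new Uint8Array(audioBufferToWav(rendered));
        var binary = "";
        for (var i = 0; i < bytes.length; i++) {
            binary += String.fromCharCode(bytes[i]);
        }
        return "data:audio/wav;base64," + btoa(binary);
    });
}

// Minimal 16-bit PCM WAV encoder: interleaves all channels of an AudioBuffer.
function audioBufferToWav(buffer) {
    var numChannels = buffer.numberOfChannels;
    var sampleRate = buffer.sampleRate;
    var frames = buffer.length;
    var bytesPerSample = 2;
    var dataSize = frames * numChannels * bytesPerSample;
    var arrayBuffer = new ArrayBuffer(44 + dataSize);
    var view = new DataView(arrayBuffer);

    function writeString(offset, str) {
        for (var i = 0; i < str.length; i++) {
            view.setUint8(offset + i, str.charCodeAt(i));
        }
    }

    writeString(0, "RIFF");
    view.setUint32(4, 36 + dataSize, true);
    writeString(8, "WAVE");
    writeString(12, "fmt ");
    view.setUint32(16, 16, true);                                        // fmt chunk size
    view.setUint16(20, 1, true);                                         // PCM format
    view.setUint16(22, numChannels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
    view.setUint16(32, numChannels * bytesPerSample, true);              // block align
    view.setUint16(34, 16, true);                                        // bits per sample
    writeString(36, "data");
    view.setUint32(40, dataSize, true);

    var offset = 44;
    for (var i = 0; i < frames; i++) {
        for (var ch = 0; ch < numChannels; ch++) {
            var s = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
            view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
            offset += 2;
        }
    }
    return arrayBuffer;
}
The resulting data URI could then be handed to new Audio(), which would leave the existing playback code untouched.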
I am trying to do the following:
On the server I encode h264 packets into a WebM (MKV) container structure, so that each cluster gets a single frame packet. Only the first data chunk is different, as it contains something called the Initialization Segment. Here it is explained quite well.
Then I stream those clusters one by one in a binary stream via WebSocket to a browser, which is Chrome.
It probably sounds weird that I use the h264 codec and not VP8 or VP9, which are the native codecs for the WebM video format. But it appears that the HTML video tag has no problem playing this sort of video container. If I just write the whole stream to a file and pass it to video.src, it plays fine. But I want to stream it in real time; that's why I am breaking the video into chunks and sending them over WebSocket.
On the client, I am using the MediaSource API. I have little experience in web technologies, but I found that's probably the only way to go in my case.
And it doesn't work. I am getting no errors, the stream runs OK, and the video object emits no warnings or errors (checking via the developer console).
The client side code looks like this:
<script>
    $(document).ready(function () {
        var sourceBuffer;
        var player = document.getElementById("video1");
        var mediaSource = new MediaSource();
        player.src = URL.createObjectURL(mediaSource);
        mediaSource.addEventListener('sourceopen', sourceOpen);

        // array with incoming segments:
        var mediaSegments = [];

        var ws = new WebSocket("ws://localhost:8080/echo");
        ws.binaryType = "arraybuffer";

        player.addEventListener("error", function (err) {
            $("#id1").append("video error " + err.error + "\n");
        }, false);
        player.addEventListener("playing", function () {
            $("#id1").append("playing\n");
        }, false);
        player.addEventListener("progress", onProgress);

        ws.onopen = function () {
            $("#id1").append("Socket opened\n");
        };

        function sourceOpen() {
            sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
        }

        function onUpdateEnd() {
            if (!mediaSegments.length) {
                return;
            }
            sourceBuffer.appendBuffer(mediaSegments.shift());
        }

        var initSegment = true;
        ws.onmessage = function (evt) {
            if (evt.data instanceof ArrayBuffer) {
                var buffer = evt.data;
                // the first segment is always the 'initSegment';
                // it must be appended to the buffer first
                if (initSegment == true) {
                    sourceBuffer.appendBuffer(buffer);
                    sourceBuffer.addEventListener('updateend', onUpdateEnd);
                    initSegment = false;
                } else {
                    mediaSegments.push(buffer);
                }
            }
        };
    });
</script>
I also tried different profile codes for the MIME type, even though I know that my codec is "high profile". I tried the following profiles:
avc1.42E01E baseline
avc1.58A01E extended profile
avc1.4D401E main profile
avc1.64001E high profile
In some examples I found from 2-3 years ago, I have seen developers using type="video/x-matroska", but probably a lot has changed since then, because now even video.src doesn't handle this sort of MIME type.
Additionally, in order to make sure the chunks I am sending through the stream are not corrupted, I opened a local streaming session in VLC player and it played it progressively with no issues.
The only thing I suspect is that MediaSource doesn't know how to handle this sort of hybrid container, and then I wonder why the video object plays such a video fine. Am I missing something in my client-side code? Or does the MediaSource API indeed not support this type of media?
PS: For those curious why I am using MKV container and not MPEG DASH, for example. The answer is - container simplicity, data writing speed and size. EBML structures are very compact and easy to write in real time.
Trying to follow the example here, which is basically a c&p of this
I think I got most of the parts down, except for all the node.connect() calls.
From what I understand, this sequence of code is needed to provide the audio analyzer with an audio stream:
var source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(audioCtx.destination);
I can't seem to make sense of it as it looks rather ouroboros-y to me.
And unfortunately, I can't seem to find any documentation on .connect(), so I'm quite lost and would appreciate any clarification!
Oh and I'm loading an .mp3 via pure javascript new Audio('db.mp3').play(); and am trying to use that as the source without creating an <audio> element.
Can a mediaStream object be created from this to feed into .createMediaStreamSource(stream)?
connect simply defines the output for the filters.
In this case, your source loads the stream into the buffer and writes to the input of the next filter which is defined by the connect function. This is repeated for your analyser filter.
Think of it as pipes.
Here is a sample code snippet that I wrote a few years back using the Web Audio API.
this.scriptProcessor = this.audioContext.createScriptProcessor(this.scriptProcessorBufferSize,
                                                               this.scriptProcessorInputChannels,
                                                               this.scriptProcessorOutputChannels);
this.scriptProcessor.connect(this.audioContext.destination);
this.scriptProcessor.onaudioprocess = updateMediaControl.bind(this);

// Set up the Gain Node with a default value of 1 (max volume).
this.gainNode = this.audioContext.createGain();
this.gainNode.connect(this.audioContext.destination);
this.gainNode.gain.value = 1;

sewi.AudioResourceViewer.prototype.playAudio = function() {
    if (this.audioBuffer) {
        this.source = this.audioContext.createBufferSource();
        this.source.buffer = this.audioBuffer;
        this.source.connect(this.gainNode);
        this.source.connect(this.scriptProcessor);
        this.beginTime = Date.now();
        this.source.start(0, this.offset);
        this.isPlaying = true;
        this.controls.update({playing: this.isPlaying});
        updateGraphPlaybackPosition.call(this, this.offset);
    }
};
So as you can see, my source is connected to a gainNode and to a scriptProcessor. When the audio starts playing, the data is passed along source -> gainNode -> destination and source -> scriptProcessor -> destination, flowing through the "pipes" that connect them, as defined by connect(). When the audio data passes through the gainNode, the volume can be adjusted by changing the amplitude of the audio wave. It is also passed to the script processor so that events can be attached and triggered while the audio is being processed.
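As for the last part of the question: new Audio('db.mp3') returns an HTMLMediaElement, so rather than building a MediaStream you can plug it into the graph with createMediaElementSource. A minimal sketch (file name taken from the question):
var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();

// An Audio object is an HTMLMediaElement; no <audio> tag in the DOM is needed.
var audioEl = new Audio('db.mp3');
var source = audioCtx.createMediaElementSource(audioEl);

source.connect(analyser);               // source -> analyser
analyser.connect(audioCtx.destination); // analyser -> speakers
audioEl.play();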
I've been trying to create polyphonic WAV playback with node.js on a Raspberry Pi 3 running the latest Raspbian:
shelling out to aplay/mpg123/some other program - this only lets me play a single sound at once
I tried a combination of https://github.com/sebpiq/node-web-audio-api and https://github.com/TooTallNate/node-speaker (sample code below), but the audio quality is very low, with a lot of distortion
Is there anything I'm missing here? I know I could easily do it in another programming language (I was able to write C++ code with SDL, and Python with pygame), but the question is if it's possible with node.js :)
Here's my current web-audio-api + node-speaker code:
var AudioContext = require('web-audio-api').AudioContext;
var Speaker = require('speaker');
var fs = require('fs');

var track1 = './tracks/1.wav';
var track2 = './tracks/1.wav';

var context = new AudioContext();
context.outStream = new Speaker({
    channels: context.format.numberOfChannels,
    bitDepth: context.format.bitDepth,
    sampleRate: context.format.sampleRate
});

function play(audioBuffer) {
    if (!audioBuffer) { return; }
    var bufferSource = context.createBufferSource();
    bufferSource.connect(context.destination);
    bufferSource.buffer = audioBuffer;
    bufferSource.loop = false;
    bufferSource.start(0);
}

var audioData1 = fs.readFileSync(track1);
var audioData2 = fs.readFileSync(track2);
var audioBuffer1, audioBuffer2;

context.decodeAudioData(audioData1, function(audioBuffer) {
    audioBuffer1 = audioBuffer;
    if (audioBuffer1 && audioBuffer2) { playBoth(); }
});

context.decodeAudioData(audioData2, function(audioBuffer) {
    audioBuffer2 = audioBuffer;
    if (audioBuffer1 && audioBuffer2) { playBoth(); }
});

function playBoth() {
    console.log('playing...');
    play(audioBuffer1);
    play(audioBuffer2);
}
audio quality is very low, with a lot of distortions
According to the WebAudio spec (https://webaudio.github.io/web-audio-api/#SummingJunction):
No clipping is applied at the inputs or outputs of the AudioNode to allow a maximum of dynamic range within the audio graph.
Now if you're playing two audio streams, it's possible that summing them results in a value that's beyond the acceptable range, which sounds like - distortions.
Try lowering the volume of each audio stream by first piping them through a GainNode as so:
function play(audioBuffer) {
    if (!audioBuffer) { return; }
    var bufferSource = context.createBufferSource();
    var gainNode = context.createGain();
    gainNode.gain.value = 0.5; // for instance, find a good value
    bufferSource.connect(gainNode);
    gainNode.connect(context.destination);
    bufferSource.buffer = audioBuffer;
    bufferSource.loop = false;
    bufferSource.start(0);
}
Alternatively, you could use a DynamicsCompressorNode, but manually setting the gain gives you more control over the output.
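If you do want to try the compressor route, the change is small; a sketch (assuming the web-audio-api package implements createDynamicsCompressor, which is worth verifying):
function play(audioBuffer) {
    if (!audioBuffer) { return; }
    var bufferSource = context.createBufferSource();
    var compressor = context.createDynamicsCompressor();
    bufferSource.connect(compressor);        // source -> compressor
    compressor.connect(context.destination); // compressor -> output
    bufferSource.buffer = audioBuffer;
    bufferSource.loop = false;
    bufferSource.start(0);
}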
This isn't exactly answer-worthy but I can't post comments at the moment ><
I had a similar problem with an app made using js audio api and the, rather easy fix, was lowering the quality of the audio and changing the format.
In your case, what I can think of is setting the bit depth and sampling frequency as low as possible without affecting the listener's experience (e.g. 44.1 kHz and 16-bit depth).
You might also try changing the format; WAV, in theory, should be quite good at not being CPU-intensive, but there are other uncompressed formats as well (e.g. .aiff).
You may try using multiple cores of the pi:
https://nodejs.org/api/cluster.html
Although this may prove a bit complicated, if you are doing the audio streaming in parallel with other unrelated processes, you could try moving the audio onto a separate core.
An (easy) thing you could try would be running node with more RAM, although in your case I doubt that is possible.
The biggest problem, however, might be the code; sadly enough I am not experienced with the modules you are using and as such can't give real advice on that (hence why I said this is not answer-worthy :p).
When you create the Speaker instance, set the parameters like this:
channels = 1 // try 1 or 2 and see which gives the best quality
bitDepth = 16
sampleRate = 48000 // typically 44100 for speech and higher for music
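Expressed against the snippet in the question, that would look something like the following (the values are starting points to experiment with, and the channel count in particular has to match what the AudioContext actually outputs):
context.outStream = new Speaker({
    channels: 1,       // try 1 or 2 and see which sounds best
    bitDepth: 16,
    sampleRate: 48000  // 44100 is also common
});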
You can spawn two aplay processes from node, each playing one file. Use detached: true to allow node to continue running.
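A minimal sketch of that approach (file paths are placeholders):
var spawn = require('child_process').spawn;

function playDetached(file) {
    var child = spawn('aplay', [file], {
        detached: true,
        stdio: 'ignore' // don't tie aplay's stdio to the node process
    });
    child.unref(); // let node keep running (or exit) independently of aplay
}

// Both files start playing at roughly the same time.
playDetached('./tracks/1.wav');
playDetached('./tracks/2.wav');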