I am making a video editing tool where the user loads a local video into the application and edits it. For this I have to extract the audio from the local file.
Currently I am loading the video file through an XMLHttpRequest, which gives an ArrayBuffer as output. From this ArrayBuffer, using decodeAudioData on the AudioContext object, I get an AudioBuffer, which is used to paint the canvas.
let audioContext = new (window.AudioContext || window.webkitAudioContext)();
var req = new XMLHttpRequest();
req.open('GET', this.props.videoFileURL, true);
req.responseType = 'arraybuffer'; // fetch the video file as binary data
req.onload = e => {
  // decode the audio track out of the video container
  audioContext.decodeAudioData(
    req.response,
    buffer => {
      this.currentBuffer = buffer;
      this.props.setAudioBuffer(buffer);
      requestAnimationFrame(this.updateCanvas);
    },
    this.onDecodeError
  );
  console.log(req.response);
};
req.send();
This works for most MP4 files, but I get a decode error when I test with MPEG-1/2 encoded video files.
Edit 1:
I understand this is a demuxing issue, but I am not able to find a demuxer for MPEG-1.
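One option would be to transcode in the browser with ffmpeg.wasm, which bundles its own demuxers. This is a sketch only, assuming the 0.11.x API of @ffmpeg/ffmpeg (an extra dependency, not something the Web Audio API provides):

// Sketch: transcode the MPEG-1 file to WAV in the browser with ffmpeg.wasm,
// then hand the result to decodeAudioData as before.
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

async function extractAudio(videoFileURL, audioContext) {
  const ffmpeg = createFFmpeg({ log: true });
  await ffmpeg.load();
  ffmpeg.FS('writeFile', 'input.mpg', await fetchFile(videoFileURL));
  await ffmpeg.run('-i', 'input.mpg', '-vn', 'output.wav'); // drop video, keep audio
  const wav = ffmpeg.FS('readFile', 'output.wav'); // Uint8Array of WAV bytes
  return audioContext.decodeAudioData(wav.buffer);
}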
I'm trying to optimize the loading times of audio files in a project where we need to use an AudioBufferSourceNode, which requires the audio buffer to be fully loaded.
But could I load, say, the first 10 minutes of audio first and play it while downloading the rest in the background, and later create another source node loaded with the second part of the file?
My current implementation loads all of the audio first, which isn't great because it takes time. My files are 60-70 MB.
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;
    // the whole file must be decoded before playback can start
    audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.loop = true;
    },
    function(e) { console.log('Error with decoding audio data: ' + e.err); });
  }
  request.send();
}
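The chaining idea from the question—play the first part while the rest downloads, then schedule a second source node—could look roughly like this. It is a sketch only: buffer1 and buffer2 are placeholders for two independently decoded parts, and it assumes each part decodes on its own:

// Schedule two decoded chunks back-to-back on one AudioContext timeline.
function playInSequence(audioCtx, buffer1, buffer2) {
  const startAt = audioCtx.currentTime + 0.1; // small safety margin
  const first = audioCtx.createBufferSource();
  first.buffer = buffer1;
  first.connect(audioCtx.destination);
  first.start(startAt);
  const second = audioCtx.createBufferSource();
  second.buffer = buffer2;
  second.connect(audioCtx.destination);
  second.start(startAt + buffer1.duration); // exactly where the first ends
}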
I think you can achieve what you want by using the WebCodecs API (which is currently only available in Chrome), but it requires some plumbing.
To get the file as a stream you could use fetch() instead of XMLHttpRequest.
Then you would need to demux the encoded file to get the raw audio data and decode it with an AudioDecoder. With a bit of luck it will output AudioData objects; these can be used to get the raw sample data, which can then be used to create an AudioBuffer.
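Converting one of those AudioData objects into an AudioBuffer could look roughly like this sketch, which assumes the decoder outputs planar float32 ('f32-planar'); other sample formats would need a conversion step first:

// Sketch: copy one WebCodecs AudioData object into a Web Audio AudioBuffer.
function audioDataToAudioBuffer(audioCtx, audioData) {
  const buffer = audioCtx.createBuffer(
    audioData.numberOfChannels,
    audioData.numberOfFrames,
    audioData.sampleRate
  );
  for (let ch = 0; ch < audioData.numberOfChannels; ch++) {
    const plane = new Float32Array(audioData.numberOfFrames);
    audioData.copyTo(plane, { planeIndex: ch }); // one channel per plane
    buffer.copyToChannel(plane, ch);
  }
  audioData.close(); // release the decoder's memory
  return buffer;
}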
There are not many WebCodecs examples available yet. I think the example showing how to decode an MP4 is the closest one to your use case so far.
Task.
The goal is to export the spectrogram data produced by the Web Audio API into a .csv array, divided into 10 parts (each part covering 10% of the area along the vertical axis, which is frequency), and then load the data from the array into a StreamGraph, see pic-1.
Source code:
https://codepen.io/lolliuym/pen/GREjwNR?editors=1010
Attempts.
1 - Define the online and offline audio contexts
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100); // 2 channels, 40 s at 44.1 kHz
source = offlineCtx.createBufferSource();
2 - Use XHR to load the audio track and decodeAudioData to decode it into a buffer, then place the buffer in the source. Unfortunately, the Web Audio API does not provide a way to create a new .csv file. Example here:
function getData() {
  request = new XMLHttpRequest();
  request.open('GET', 'audio.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    let audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(buffer) {
      myBuffer = buffer;
      source.buffer = myBuffer;
      source.connect(offlineCtx.destination);
      source.start();
      // render the whole graph faster than real time
      offlineCtx.startRendering();
    },
    function(e) { console.log('Error with decoding audio data: ' + e.err); });
  };
  request.send();
}
I found a way to create a .csv file from an existing .csv (which defaults to NULL), but I don't see how that approach could be hooked into the Web Audio API.
Is there any other way?
Summary.
In all my attempts, I have failed to create the contents of a .csv file through which data from the Web Audio API could be fed into a graph that plays in the browser.
Questions.
How can a new .csv file be created by dividing the spectrogram area into 10 parts?
What method can be used to export the data?
How can the StreamGraph be connected to the .csv file?
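Not part of the original post, but a rough sketch of one possible direction: sample an AnalyserNode on a live AudioContext (the audioCtx/source names below are placeholders), fold the spectrum into 10 bands, and build the .csv text in memory so it can be downloaded as a Blob:

const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);
analyser.connect(audioCtx.destination);

const bins = new Uint8Array(analyser.frequencyBinCount);
const rows = ['time,' + Array.from({ length: 10 }, (_, i) => 'band' + i).join(',')];

// take one CSV row: mean energy of each of the 10 frequency bands
function sample() {
  analyser.getByteFrequencyData(bins);
  const bandSize = Math.floor(bins.length / 10);
  const bands = [];
  for (let b = 0; b < 10; b++) {
    let sum = 0;
    for (let i = b * bandSize; i < (b + 1) * bandSize; i++) sum += bins[i];
    bands.push((sum / bandSize).toFixed(2));
  }
  rows.push(audioCtx.currentTime.toFixed(3) + ',' + bands.join(','));
}
setInterval(sample, 100); // 10 rows per second

// offer the accumulated rows as a downloadable .csv file
function downloadCsv() {
  const blob = new Blob([rows.join('\n')], { type: 'text/csv' });
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'spectrogram.csv';
  a.click();
}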
I'm trying to build a web app that records audio in the browser and sends the recorded audio to a Django API every 3 seconds for analysis (emotion recognition from voice). I'm using MediaRecorder for recording audio, but only noise is saved in the wave file.
I'm sending the recorded audio (as a Blob) to the Django API, and on receiving it at the backend I save it as a .wav file.
I'm sending the recorded audio like this:
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => { audio_handler(stream); });

var audio_chunks = [];

audio_handler = function(stream) {
  rec = new MediaRecorder(stream, { mimeType: 'audio/webm', codecs: 'opus' });
  rec.ondataavailable = function(e) {
    audio_chunks.push(e.data); // collect encoded chunks as they arrive
  };
};

// on rec.stop()
var blob = new Blob(audio_chunks, { 'type': 'audio/wav; codecs=opus' });
console.log(blob);

var xhttp = new XMLHttpRequest();
xhttp.open("POST", "http://localhost:8000/er/", true);
var data = new FormData();
data.append('data', blob, 'audio_blob');
xhttp.onreadystatechange = function() {
  if (this.readyState == 4 && this.status == 200) {
    console.log(this.responseText);
  }
};
xhttp.send(data);
Saving on the Django backend as:
from django.http import JsonResponse
import wave

def get_emotion(request):
    print(request.FILES.get('data'))
    audio_data = request.FILES.get('data')
    print(type(audio_data))
    print(audio_data.size)
    audio = wave.open('test.wav', 'wb')
    audio.setnchannels(1)
    audio.setnframes(1)
    audio.setsampwidth(1)
    audio.setframerate(16000)
    blob = audio_data.read()
    audio.writeframes(blob)  # on playing 'test.wav' only noise can be heard
    return JsonResponse({})
Currently the saved audio file contains only noise, whereas I expect it to contain the audio spoken while recording.
Please suggest if there is any other way to do the same thing (record audio in the browser and send it to a Django API, to save it as an audio file there).
If any more information is needed, feel free to ask. Thank you!
The WAV file format doesn't support the Opus codec.
For Opus you need to use the WebM container.
So you need to change this
new Blob(audio_chunks, {'type':'audio/wav; codecs=opus'});
to
new Blob(audio_chunks, {'type':'audio/webm; codecs=opus'});
or
new Blob(audio_chunks, { 'type' : 'audio/wav; codecs=MS_PCM' }); // if it is supported
Make sure the file you save the blob to on the server uses the same container format as the one you send.
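For reference, a sketch of the corrected client-side flow (the filename is an assumption; everything stays WebM/Opus end to end):

// keep the recording, the Blob, and the uploaded filename all WebM/Opus
rec = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
// ...collect chunks in rec.ondataavailable as before...

var blob = new Blob(audio_chunks, { 'type': 'audio/webm;codecs=opus' });
var data = new FormData();
data.append('data', blob, 'audio.webm'); // save it server-side as .webm, not .wav
xhttp.send(data);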
I faced the same issue. I advise you to read the original parameters from the uploaded audio and set those while saving, instead of hard-coding arbitrary values:
obj = wave.open(audio_data, 'r')   # read the parameters of the uploaded file
audio = wave.open('/../test.wav', 'wb')
audio.setnchannels(obj.getnchannels())
audio.setnframes(obj.getnframes())
audio.setsampwidth(obj.getsampwidth())
audio.setframerate(obj.getframerate())
blob = audio_data.read()
audio.writeframes(blob)
This sets the actual channel count, frame count, sample width, etc. on the audio you are writing, without introducing noise into your .wav file. Make sure you are using at least Django==1.8.19.
I'm using XMLHttpRequest to load an audio file, and the Web Audio API to get its frequency data for a visualizer. I am new to the concept, but what I've learned is that the second parameter ("url") of XMLHttpRequest's open method must be on the same domain as the current document, i.e. a relative URL (e.g. audio-folder/music.mp3).
I want to open an audio file from a third-party website outside of the database (https://c1.rbxcdn.com/36362fe5a1eab0c46a9b23cf4b54889e), but this, of course, returns an error.
I assume the way to do this is to save the audio from the base URL into the database so the XMLHttpRequest can be sent, and then remove it once the audio's frequency has been calculated. But how exactly can I do this? I'm not sure where to start or how efficient this is, so if you have any advice I'd love to hear it.
Here is the code I'm working with.
function playSample() {
  var request = new XMLHttpRequest();
  request.open('GET', 'example.mp3', true);
  request.responseType = 'arraybuffer';
  // When loaded, decode the data
  request.onload = function() {
    setupAudioNodes();
    context.decodeAudioData(request.response, function(buffer) {
      // when the audio is decoded, play the sound
      sourceNode.buffer = buffer;
      sourceNode.start(0);
      rafID = window.requestAnimationFrame(updateVisualization);
    }, function(e) {
      console.log(e);
    });
  };
  request.send();
}
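One common workaround for the cross-origin restriction (not in the original post) is a small same-origin proxy: the browser requests a path on your own server, and the server fetches the third-party URL and relays the bytes. A sketch assuming Node 18+ with Express, where the route name is made up:

const express = require('express');
const app = express();

app.get('/proxy-audio', async (req, res) => {
  // fetch the third-party audio server-side, where CORS does not apply
  const upstream = await fetch('https://c1.rbxcdn.com/36362fe5a1eab0c46a9b23cf4b54889e');
  res.set('Content-Type', upstream.headers.get('content-type') || 'audio/mpeg');
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(3000); // the XHR can now target '/proxy-audio' same-origin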
I have a web server that streams WAV audio, and I would like to play it in a web browser using the Web Audio API.
Here is my code :
function start() {
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.responseType = "arraybuffer"; // Read as binary data
  request.onload = function() {
    var data = request.response;
    playSound(data);
  };
  request.send(); // kick off the download
}
The problem here is that onload won't be called until the data is completely loaded, which is not very convenient for streaming.
So I looked for another event and found progress (onprogress), but the problem there is that request.response returns null until the data is completely loaded.
What's the correct way to play an audio stream with JavaScript?
Thanks for your help.
Lukas
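For comparison, the simplest streaming path is to let an audio element buffer the stream itself and then tap it into the Web Audio graph; a sketch assuming the same url variable as above:

// The browser buffers and plays the stream progressively; no full download needed.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const el = new Audio(url);
el.crossOrigin = 'anonymous'; // required if the stream is served cross-origin
const sourceNode = audioCtx.createMediaElementSource(el);
sourceNode.connect(audioCtx.destination);
el.play();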