Position-dependent audio playback for multiple audio files - JavaScript

I've been designing an HTML/JS based media player whose goal is to give the user some of the simple playback and region-movement functionality found in common DAWs. My UI is composed of 5 individual tracks plus playback controls. Using wavesurfer.js I've been able to create any number of wavesurfer instances/regions inside these tracks, drag and position them anywhere I'd like, and play the selected audio files.
In my experimentation with wavesurfer.js, I've found that each instance has its own playhead which indicates the current playback position of the selected audio file and allows the user to navigate playback within that instance.
My issue is that I would also like to have a single "master" playhead which is not contained within any particular instance and am unsure how to approach this.
While there are complex web DAWs out there, I haven't found a source that really helps me understand how to handle audio playback in the way I need, though I know it's been done before. I've read through the Web Audio API and wavesurfer.js documentation, and unless there's something I'm missing in wavesurfer's capabilities, I'm assuming I need to work with the Web Audio API directly to achieve this result. Would I start by defining a new AudioContext with all of the present audio files loaded as AudioNodes?
With multiple audio files/wavesurfer instances whose playback must depend on their user-determined positions within the whole track workspace, how should I approach handling audio playback?
Thanks for any insight, I appreciate it.

If you work with, for example, five trackouts, be aware that if you load them into AudioBuffers (or AudioNodes, as you call them) they will take up quite a lot of RAM if the files are long. So you may be able to reach your goal with five audio elements, setting their playback state and position, though audio elements can't be routed through filters and effects (ConvolverNode etc.) as easily. It really depends on your setup, and a prototype shouldn't be too hard to program, so if you are really passionate about it you can build both and compare.
On my website 1ln.de you can load different MP3 files into the audio slots, and RAM usage climbs to about 30% of 8 GB when five MP3s are held in memory. But RAM is much faster than streaming audio, because audio elements need to buffer their contents first and you can't handle and manipulate them as easily as AudioBuffers.
It really depends on the complexity of your tool and what you want to achieve.
For professional usage and fast playback I would recommend working with AudioBuffers. Decoding the audio files will take some time up front, but you can detect when decoding is finished.
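Very roughly, the buffer-based approach could look like the sketch below. It assumes each clip's position on the master timeline (in seconds) comes from your UI/regions; the clips array and startMasterPlayback() are illustrative names, not part of wavesurfer or the Web Audio API.

// Minimal sketch of the AudioBuffer approach, assuming each clip's
// start offset on the master timeline comes from your UI.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

// Hypothetical clip list: startTime is the user-determined offset in seconds.
const clips = [
  { url: 'track1.mp3', startTime: 0.0 },
  { url: 'track2.mp3', startTime: 3.5 },
];

async function loadClip(clip) {
  const response = await fetch(clip.url);
  const arrayBuffer = await response.arrayBuffer();
  clip.buffer = await audioCtx.decodeAudioData(arrayBuffer);
}

async function startMasterPlayback() {
  await Promise.all(clips.map(loadClip));
  const masterStart = audioCtx.currentTime + 0.1; // small scheduling margin
  for (const clip of clips) {
    const source = audioCtx.createBufferSource();
    source.buffer = clip.buffer;
    source.connect(audioCtx.destination);
    // Every clip is scheduled against the same clock, which is what a
    // single "master" playhead boils down to.
    source.start(masterStart + clip.startTime);
  }
  // The master playhead position is then audioCtx.currentTime - masterStart.
  return masterStart;
}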
Hopefully I got your question right.
If you want to program a DAW feel free to say hi for some help.

Related

Audio stream modulation in a browser

I am trying to create a calling app that uses WebRTC and a feature I want to add is audio obfuscation.
I want the ability to change the pitch of the audio I am sending, either at the source or at the receiving end.
I have tried various well-known libraries like P5.js, but was still unable to get the result I wanted.
I need some suggestions or sample code for modulating real-time audio, or even for simulating the impression that the real-time pitch of the audio is being modulated.
I would prefer JavaScript since it will run on the client, but I am open to alternatives like WebAssembly if that helps get the job done.
Thanks in advance!
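For what it's worth, one way to experiment at the sending end is to route the microphone track through Web Audio before handing it to the RTCPeerConnection. A real pitch shift would need something like a granular or phase-vocoder algorithm in an AudioWorklet, but even a DelayNode whose delay time is modulated by a low-frequency oscillator produces an audible vibrato-style pitch wobble. The sketch below assumes pc is your existing RTCPeerConnection; sendObfuscatedAudio() is an illustrative name.

// Hedged sketch: obfuscate outgoing WebRTC audio by routing it through Web Audio.
// An oscillator-modulated DelayNode gives a vibrato-like pitch wobble; a true
// pitch shift would need an AudioWorklet implementing a proper algorithm.
async function sendObfuscatedAudio(pc) {
  const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();

  const source = audioCtx.createMediaStreamSource(micStream);
  const delay = audioCtx.createDelay(1.0);
  delay.delayTime.value = 0.03; // 30 ms base delay

  // Low-frequency oscillator wiggles the delay time -> pitch modulation.
  const lfo = audioCtx.createOscillator();
  const lfoGain = audioCtx.createGain();
  lfo.frequency.value = 5;      // 5 Hz wobble
  lfoGain.gain.value = 0.005;   // +/- 5 ms swing
  lfo.connect(lfoGain).connect(delay.delayTime);
  lfo.start();

  const sink = audioCtx.createMediaStreamDestination();
  source.connect(delay).connect(sink);

  // Send the processed track instead of the raw microphone track.
  const processedTrack = sink.stream.getAudioTracks()[0];
  pc.addTrack(processedTrack, sink.stream);
}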

Is there a way to feed multiple audio interfaces with Web Audio?

I am currently thinking about how to build an app that feeds multiple audio interfaces with different sounds, for example when I have a second sound card at my disposal.
As far as I can tell from my research, an AudioContext in Web Audio only feeds a single destination, and I haven't seen a way to select the actual destination hardware.
Can anyone think of a way to work around this?
Nope. It'll use whatever your system default is.
You can output audio from an audio/video element to a specific output device using setSinkId():
videoElement.setSinkId(deviceID).then().catch()
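A slightly fuller sketch of that approach, assuming the user has already granted a media permission so that device labels are populated (routeToSecondSoundCard() is an illustrative name):

// Sketch: list audio output devices and route an <audio>/<video> element
// to a chosen one via setSinkId().
async function routeToSecondSoundCard(mediaElement) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const outputs = devices.filter(d => d.kind === 'audiooutput');

  // Pick whichever output you want, e.g. the second interface.
  const target = outputs[1] || outputs[0];
  if (!target) throw new Error('No audio output devices found');

  try {
    await mediaElement.setSinkId(target.deviceId);
    console.log('Audio now routed to', target.label);
  } catch (err) {
    console.error('setSinkId failed (unsupported browser or permissions):', err);
  }
}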

Custom progressive audio streaming in browser

Say I'd like to create my very own progressive streaming mechanism in JavaScript, because I find the browser's built-in streaming not fault-tolerant enough, or I'd like to implement my own custom method over WebSocket. I would like to create a buffer which holds the already-downloaded segments of a continuous media file (say an ArrayBuffer or something like that). Is it possible to play this file even if it hasn't been downloaded from start to end?
My only idea was the Web Audio API, which has a noteOn() function for precisely timing the start of each segment. However, I don't know how gapless this would be. It also introduces the problem that I have to know exactly where the audio file can be cut safely on the server side, so that the next part can be decoded without any loss or gaps. E.g. MP3's bit reservoir stores audio data in neighbouring audio frames even in CBR mode, which makes things difficult.
What about creating a ScriptProcessorNode that feeds from your incoming buffers? The biggest issue is making sure that the segments are convertible to raw audio samples, but otherwise you could write a simple function in the onaudioprocess event handler that pulls in the next available buffer chunk and copies it into the node's output buffers. Since this would be a pull-on-demand mechanism, you wouldn't need to worry about timing segment playback.
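A rough sketch of that pull-on-demand idea, under the assumption that the incoming segments have already been decoded to mono Float32Array PCM at the context's sample rate; pcmQueue is a hypothetical FIFO that your download code would push into:

// Rough sketch of a ScriptProcessorNode draining a queue of decoded PCM chunks.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const pcmQueue = [];   // push decoded Float32Array chunks here
let current = null;    // chunk currently being drained
let offset = 0;        // read position inside `current`

const processor = audioCtx.createScriptProcessor(4096, 1, 1);
processor.onaudioprocess = (event) => {
  const out = event.outputBuffer.getChannelData(0);
  let written = 0;
  while (written < out.length) {
    if (!current || offset >= current.length) {
      current = pcmQueue.shift() || null;
      offset = 0;
      if (!current) break;   // underrun: leave the rest silent
    }
    const n = Math.min(out.length - written, current.length - offset);
    out.set(current.subarray(offset, offset + n), written);
    written += n;
    offset += n;
  }
  out.fill(0, written);      // pad any underrun with silence
};
processor.connect(audioCtx.destination);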

Get Relative Loudness of Song, Javascript

I'm trying to build an MP3 player for my site using JavaScript (and any plugins/frameworks (jQuery)/libraries that are relevant) and HTML5. So I built the player (more accurately, I implemented jPlayer), and now I want to make a visualizer.
OK, maybe it's not a visualizer (all the names for ways to visualize sound have always confused me); I guess what I want is something like this waveform image:
(source: anthonymattox.com)
Or just something that graphs the amplitude (loudness) of an MP3.
So to start, does anyone know an API that can do this?
If you don't, that's OK; I guess I'll build my own, for which I need to know:
Does anybody know a way to get the amplitude/loudness of an mp3 at any given point using JavaScript?
EDIT
Changed to a question about php: Visualization of MP3 - PHP
You would need to be able to decode the MP3 yourself. The HTML5 audio element, and the browsers' implementations of it, don't expose this sort of data. For example, look at Firefox's exposed methods for JavaScript. The closest thing to what you want is the "volumechange" event, but that refers to the volume control on the browser's rendered player (i.e. output volume). It has nothing to do with the actual dB level of the audio source.
I imagine that the only feasible way to do this is to render your waveform to a graphic ahead of time, and then "reveal" it as the song plays (e.g. with the "timeupdate" event).
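A sketch of that reveal technique, assuming you have an <audio> element, a canvas, and a pre-rendered waveform.png of the full track (all names here are illustrative):

// Sketch: reveal a pre-rendered waveform image in sync with playback.
const audio = document.getElementById('player');
const canvas = document.getElementById('waveCanvas');
const ctx = canvas.getContext('2d');

const waveform = new Image();
waveform.src = 'waveform.png';   // rendered ahead of time, server-side or offline

audio.addEventListener('timeupdate', () => {
  const progress = audio.currentTime / audio.duration || 0;
  const revealWidth = Math.floor(canvas.width * progress);

  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = 0.3;          // dim the unplayed portion
  ctx.drawImage(waveform, 0, 0, canvas.width, canvas.height);
  ctx.globalAlpha = 1.0;          // fully reveal the played portion
  if (revealWidth > 0) {
    ctx.drawImage(waveform,
      0, 0, (revealWidth / canvas.width) * waveform.width, waveform.height,
      0, 0, revealWidth, canvas.height);
  }
});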

Waveform visualization in JavaScript from audio [duplicate]

This question already has answers here:
How to write a web-based music visualizer?
(4 answers)
Closed 5 years ago.
I'm trying to use JavaScript to display the waveform for an audio file, but I don't even know how to get started. I found the Audio Data API, but I am unfamiliar with most audio terms and don't really know what it provides or how to manipulate it. I found examples of waveforms in JavaScript, but they are too complicated / I can't comprehend what is going on. So my question is: how can you use JavaScript to create a waveform of a song on canvas, and what exactly is the process behind it?
Here's some sample code from my book (HTML5 Multimedia: Develop and Design) that does exactly that: Audio Waveform. It uses the Mozilla Audio Data API.
The code simply takes snapshots of the audio data and uses it to draw on the canvas.
Here's an article from the BBC's R&D team showing how they did exactly that to build a couple of JS libraries and more besides. The results all seem to be openly available and rather good.
Rather than use the Audio Data API, which you cannot be sure is supported by all your users' browsers, it might be better if you generate your waveform data server-side (the BBC team created a C++ app to do that) and then at least you are decoupling the client-side display aspect from the playback aspect. Also, bear in mind that the entire audio file has to reach the browser before you can calculate peaks and render a waveform. I am not sure if streaming files (eg MP3) can be used to calculate peaks as the file is coming in. But overall it is surely better to calculate your peaks once, server-side, then simply send the data via JSON (or even create + cache your graphics server-side - there are numerous PHP chart libraries or you can do it natively with GD).
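The client side of that split could be as small as the sketch below, assuming the server exposes the precomputed peaks as a JSON array of 0-1 values (the URL and format are illustrative assumptions):

// Client-side sketch: fetch precomputed peak data (produced server-side)
// and draw it as bars on a canvas.
async function drawPeaks(canvas, url = 'peaks.json') {
  const ctx = canvas.getContext('2d');
  const peaks = await (await fetch(url)).json();   // e.g. [0.12, 0.87, ...]

  const barWidth = canvas.width / peaks.length;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  peaks.forEach((peak, i) => {
    const barHeight = peak * canvas.height;
    ctx.fillRect(i * barWidth, (canvas.height - barHeight) / 2, barWidth * 0.8, barHeight);
  });
}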
For playback on the browser, there are several good (non-Flash!) options. Personally I like SoundManager 2 as the code is completely decoupled from display, meaning that I am free to create whatever UI / display that I like (or that the client wants). I have found it robust and reliable although I had some initial difficulty on one project with multiple players on the same page. The examples on their site are not great (imho) but with imagination you can do some cool things. SM2 also has an optional Flash fallback option for antique browsers.
I did just that with the Web Audio API, using a project called wavesurfer.
http://www.html5audio.org/2012/10/interactive-navigable-audio-visualization-using-webaudio-api-and-canvas.html
What it does is draw tiny rectangles and use an audio buffer to determine the height of each rectangle. Wavesurfer also lets you play and pause with the space bar and click on the wave to start playing from that point.
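The core of that rectangle idea, stripped of everything wavesurfer adds on top, could be sketched roughly like this (the canvas and URL are assumed to be supplied by you):

// Rough sketch: decode the file, reduce the first channel to one peak value
// per pixel column, and draw a bar for each column.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

async function drawWaveform(canvas, url) {
  const ctx = canvas.getContext('2d');
  const arrayBuffer = await (await fetch(url)).arrayBuffer();
  const audioBuffer = await audioCtx.decodeAudioData(arrayBuffer);
  const data = audioBuffer.getChannelData(0);   // first channel only

  const samplesPerBar = Math.floor(data.length / canvas.width);
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  for (let x = 0; x < canvas.width; x++) {
    let peak = 0;
    const start = x * samplesPerBar;
    for (let i = start; i < start + samplesPerBar; i++) {
      peak = Math.max(peak, Math.abs(data[i]));
    }
    const barHeight = peak * canvas.height;
    ctx.fillRect(x, (canvas.height - barHeight) / 2, 1, barHeight);
  }
}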
Update: the proof-of-concept site where you could check out what I made no longer exists. It only worked in Google Chrome, and maybe Safari, but I'm not sure about that. Let me know if you want more info.
Not well supported yet but take a look at this Firefox tone generator.
