I'm a Javascript dilettante. I need to make a webpage for mobile viewing to deploy a dynamically created but ultimately linear audio piece. Essentially I would need to load a playlist, in which some tracks are fixed but others are randomly chosen from a larger pool; there also need to be timed pauses between some of the tracks. It would need only minimal controls, probably just play/pause.
I'm looking into Web Audio API and the basic HTML5 <audio> tag. My two main concerns for choosing between them are compatibility and simplicity of use.
On the compatibility point, the main page for the API itself lists no support for Android, but this more detailed rundown shows almost all browsers in green. Which source should I trust?
Assuming the Web Audio API is viable for mobile deployment, do I need to use it? Would it make my life easier, or is it just overpowered for my purposes? It has a handy onended event handler that I can see myself using for queuing, plus precise timing functions. It also seems more explicit about loading files asynchronously with a success callback - I'd want a loading screen, so that would be useful.
I'm less clear on the capabilities of <audio>. I assume it can do everything I want, given that HTML5 players were built before the Web Audio API came along - but is it more fiddly?
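For concreteness, here's a rough sketch of the playlist structure I'm imagining (file names, pool contents and pause lengths are all placeholders):

    // Build a linear playlist where some slots are fixed and others
    // are drawn at random from a larger pool.
    const pool = ['pool-a.mp3', 'pool-b.mp3', 'pool-c.mp3'];

    function pickRandom(files) {
      return files[Math.floor(Math.random() * files.length)];
    }

    // Each entry is either a track URL or a timed pause in seconds.
    const playlist = [
      { src: 'intro.mp3' },       // fixed
      { pause: 2 },               // two seconds of silence
      { src: pickRandom(pool) },  // randomly chosen from the pool
      { src: 'outro.mp3' },       // fixed
    ];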
Web Audio works just fine on mobile.
Web Audio, in contrast to <audio>, breaks apart the loading, decoding and playing of audio and gives the developer precise control over each step. If you need precise timing - like beat-syncing - you should probably use Web Audio. <audio> is pretty imprecise.
That said, a few caveats - as Web Audio by default uses in-memory buffers, it can use a lot more memory than <audio>, and it doesn't have native components to do streaming audio. The onended event is NOT the right way to do real chaining of audio, because it's a main-thread-Javascript callback (that is to say, any event handling like this might be delayed by other JS, garbage collection, etc. - and it might be off by 50 or 100 milliseconds). If you really care about timing, you have to plan ahead and use Web Audio scheduling. (This article I wrote describes this in more detail.)
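To make the scheduling point concrete, here's a minimal sketch of playing decoded buffers back-to-back on the audio clock rather than chaining onended (modern promise-based API; the URLs and gap length are placeholders):

    const ctx = new AudioContext();

    async function loadBuffer(url) {
      const response = await fetch(url);
      const arrayBuffer = await response.arrayBuffer();
      return ctx.decodeAudioData(arrayBuffer);
    }

    // Schedule every buffer ahead of time; source.start(when) is
    // sample-accurate and immune to main-thread jitter.
    function schedulePlaylist(buffers, gapSeconds) {
      let when = ctx.currentTime + 0.1; // small safety margin
      for (const buffer of buffers) {
        const source = ctx.createBufferSource();
        source.buffer = buffer;
        source.connect(ctx.destination);
        source.start(when);
        when += buffer.duration + gapSeconds;
      }
    }

    Promise.all(['a.mp3', 'b.mp3'].map(loadBuffer))
      .then(buffers => schedulePlaylist(buffers, 2));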
It might be a duplicate, but I didn't quite find the answer, so: what is more efficient - using just a plain video element (if I don't need support for IE8, for example, or all the fancy stuff), or using a plugin for video like video.js?
My main concern is load time - is there something that makes video files load faster when using video plugins?
With a plugin (be it Flash or a framework like video.js) you have the load/initialization time of the plugin itself to consider, though that will generally be minimal compared to the pre-load time of the video. Some Flash players have a fairly large .swf file, though most (like most of the .js frameworks you'll come across) are pretty well optimized these days.
The largest chunk of time, with either, will be the pre-buffering needed to get to a state where the video can play through without further buffering. This is impacted by the resolution, framerate, etc. of the video itself, and by optimizations like the positioning of the MOOV atom (if it's at the start of an .mp4, for instance, the browser doesn't have to read/cache the entire clip before it will start playing).
Flash (a plugin) may have better support for different types of media (for instance if you want to use an adaptive streaming format like HLS or DASH) but may restrict the devices/platforms you can run on. HTML5 and the <video> tag will, as you point out, only be available on more modern browsers, but, with or without a player framework to extend it, it offers a lot of flexibility, and its capabilities are improving with time. This is a good overview of pros and cons.
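If you go the plain <video> route, the events that bracket the pre-buffering phase are easy to observe; a small sketch (the element id is a placeholder):

    const video = document.getElementById('clip');
    video.preload = 'auto'; // hint the browser to buffer early

    video.addEventListener('loadedmetadata', () => {
      console.log('metadata ready, duration:', video.duration);
    });

    video.addEventListener('canplaythrough', () => {
      // The browser estimates it can now play to the end
      // without stalling for further buffering.
      console.log('buffered enough to play through');
    });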
I have a web program which makes use of the Web Audio API. The issue is that I want to make it compatible with IE. Is there any alternative to the Web Audio API, so that I can make the same code run on IE specifically?
What are your needs? If you need to do dynamic synthesis, audio routing, etc, you will only be able to achieve that with the Web Audio API, so your IE users are out of luck.
However, if all you need to do is play audio files, then I would recommend that you use howler.js. Howler has great compatibility across different browsers and operating systems, including various versions of IE.
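Playing a file with Howler is only a few lines; a sketch assuming howler.js 2.x (the file name is a placeholder):

    // Howler uses Web Audio where available and falls back to
    // HTML5 Audio elsewhere, which is what gives it IE coverage.
    const sound = new Howl({
      src: ['track.mp3'],
      onend: function () {
        console.log('finished playing');
      },
    });

    sound.play();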
Microsoft has been working on implementing the Web Audio API, and it looks like a preview implementation is available: https://status.modern.ie/webaudioapi. If you can wait for the next version of IE, which I believe is expected to ship with Windows 10 (out this year, IIRC), it may not be worth taking the time to implement an alternative.
That said, to answer your question: I could find no way to generate audio with an oscillator on the web without the Web Audio API, other than writing Flash code, which has its own major disadvantages.
Minor note: you could possibly hack the HTML5 <audio> element to play back audio buffers that you generate in JavaScript, but keeping everything in sync and preventing jumps in the waveform would be an awful task.
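For comparison, this is roughly what the oscillator route looks like in browsers that do support the Web Audio API (the frequency and duration are placeholders):

    const ctx = new AudioContext();
    const osc = ctx.createOscillator();
    osc.type = 'sine';
    osc.frequency.value = 440; // A4
    osc.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 1); // play for one second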
What I am doing is a kind of video conference tool. I have been researching video processing recently, and it seems straightforward using the video element in combination with canvases. But I am using WebRTC to stream video to all connected clients, for which I need a MediaStream. So I am looking for a way to retrieve a MediaStream from the canvas element.
Along the way, I found a project called Whammy (http://antimatter15.com/wp/2012/08/whammy-a-real-time-javascript-webm-encoder/) which creates a video file from a canvas, but as far as I understand, it is not made for live streaming.
One alternative would be to do the video processing on the remote client, by sending it the stream plus the information on how to process it. This might work well for a few clients, but I feel it doesn't scale to multi-user conferences, since real-time video processing is still computationally intensive: every client would have to process all video streams from all connected clients.
For me it looks like a one-way-street. Getting video content into a canvas is pretty easy, the other way around is pretty hard. I thought, there might be a library for creating a MediaStream from a canvas element, but I found nothing. Any ideas on how to achieve that?
Since this question was posted, there has been a little (though not much) progress on this front.
The MediaStream Recording standard allows for recording streams (such as WebRTC) into video file formats, using the MediaRecorder API.
It should work in recent Firefox, and in Chrome (video only) after enabling the experimental flag (chrome://flags -> Experimental Web Platform features).
Also see this resource for examples.
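A sketch of the recording side, with the stream coming straight from a canvas - note that canvas.captureStream() goes beyond the standard discussed above, so treat its availability as an assumption:

    // Turn the canvas into a MediaStream (the element id and frame
    // rate are placeholders), then record it with MediaRecorder.
    const canvas = document.getElementById('source');
    const stream = canvas.captureStream(30); // 30 fps

    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    const chunks = [];

    recorder.ondataavailable = e => chunks.push(e.data);
    recorder.onstop = () => {
      const blob = new Blob(chunks, { type: 'video/webm' });
      console.log('recorded', blob.size, 'bytes of webm');
    };

    recorder.start();
    setTimeout(() => recorder.stop(), 5000); // record five seconds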
I'm working on a simple audio visualization application that uses a Web Audio API analyser to pull frequency data, as in this example. As expected, the more visual elements I add to my canvases, the more latency there is between the audio and the rendered visuals.
Is there a standard approach to accounting for this latency? I can imagine a lookahead technique that buffers the upcoming audio data. I could work with synchronizing the JavaScript and Web Audio clocks, but I'm convinced that there's a much more intuitive answer. Perhaps it is as straightforward as playing the audio aloud with a slight delay (although this is not nearly as comprehensive).
The dancer.js library seems to have the same problem (there's always a very subtle delay), whereas other applications seem to have solved the lag issue entirely. I have so far been unable to pinpoint the technical difference. SoundJS seems to handle this a bit better, but it would be nice to build from scratch.
Any methodologies to point me in the right direction are much appreciated.
I think you will find some answers to precise audio timing in this article:
http://www.html5rocks.com/en/tutorials/audio/scheduling/
SoundJS uses this approach to enable smooth looping, but still uses JavaScript timers for delayed playback, so it may not help you sync the audio timing with the animation timing. When I built the music visualizer example for SoundJS, I found I had to play around with different values for FFT size and tick frequency to get good performance. I also needed to cache a single shape and reuse it with scaling to keep the graphics performant.
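A bare-bones version of that analyse-and-draw loop, using a raw Web Audio AnalyserNode rather than SoundJS (the FFT size is just a starting point to tune):

    const ctx = new AudioContext();
    const analyser = ctx.createAnalyser();
    analyser.fftSize = 256; // small FFT: 128 frequency bins to draw
    // ...connect your source to analyser, and analyser to ctx.destination...

    const data = new Uint8Array(analyser.frequencyBinCount);

    function draw() {
      analyser.getByteFrequencyData(data);
      // ...scale one cached shape per bin instead of redrawing paths...
      requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);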
Hope that helps.
I'm concerned when you say the more visual elements you add to your canvases, the more latency you get in audio. That shouldn't really happen quite like that. Are your canvases being animated using requestAnimationFrame? What's your frame rate like?
You can't, technically speaking, synchronize the JS and Web Audio clocks - the Web Audio clock is the audio hardware clock, which is literally running off a different clock crystal than the system clock (on many systems, at least). The vast majority of Web Audio (ScriptProcessorNodes being the major exception) shouldn't have additional latency introduced when your main UI thread becomes a bit more congested.
If the problem is that the analysis just seems to lag (i.e. the visuals are consistently "behind" the audio), it could just be the inherent lag in the FFT processing. You can reduce the FFT size on the Analyser, although you'll get less definition; to fake a fix, you can also run all the audio through a delay node to re-sync it with the visuals.
Also, you may find that the "smoothing" parameter on the Analyser makes it less time-precise - try turning that down.
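A sketch of that delay trick, with the analyser tapping the signal before a DelayNode so the audible output lags the analysis by a fixed amount (the 50 ms figure is a placeholder to tune by ear):

    const ctx = new AudioContext();
    const analyser = ctx.createAnalyser();
    analyser.smoothingTimeConstant = 0.3; // lower = more time-precise

    const delay = ctx.createDelay();
    delay.delayTime.value = 0.05; // 50 ms

    // source -> analyser -> delay -> speakers: the analyser sees
    // the signal ~50 ms before you hear it.
    function wire(source) {
      source.connect(analyser);
      analyser.connect(delay);
      delay.connect(ctx.destination);
    }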
I am currently using an embedded Windows Media Player (tutorial), and I can manipulate its time slider through JavaScript. Then I discovered the jlEmbed plugin for jQuery and thought it would solve all my problems with different platforms, etc.
But after I spent a few hours setting things up, I realized that it does not have functions like setCurrentTime or getCurrentTime, and these functions are the most important ones for my type of project.
Is there a way to achieve this functionality with jlEmbed?
As far as I know, Windows Media Player does not support the type of interaction you're trying to achieve with its embedded player. I did not include any JavaScript functions for Windows Media Player because they are not necessary: jlEmbed does not affect your ability to control the embedded player with JavaScript. So, if Windows Media Player supports a particular script, jlEmbed supports it also.
If there is a JavaScript API for WMP, please point me in that direction and I will add better support for it, but I don't think one exists. If it does, jlEmbed will not prevent you from scripting it as you normally would.
I spent a great deal of time on the YouTube support, but only because I had to. Otherwise, it would have been much more difficult to control the YouTube player, which is the most popular and widely available media player on the web.
Only a small percentage of users will actually be able to use your embedded WMP presentation. The YouTube player is compatible with any browser that supports Flash. You would be better off creating a custom 'chromeless' YouTube player than using WMP for your presentation. An even better alternative would be to use Flash to make your video presentation.
According to the documentation, the following functions exist that might help:

    jlembed_seekTo(playerId, seconds, allowSeekAhead)
    jlembed_getCurrentTime(playerId)
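A usage sketch based only on those two documented functions ('myPlayer' is a placeholder player id):

    jlembed_seekTo('myPlayer', 30, true);        // jump to 0:30
    var t = jlembed_getCurrentTime('myPlayer');  // read the position back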
Hope that helps!