Changing quality of MediaRecorder and canvas.captureStream? - javascript

I've recently been trying to generate video in the browser, and have thus been playing with two approaches:
Using the whammy js library to combine webp frames into webm video. More details here.
Using MediaRecorder and canvas.captureStream. More details here.
The whammy approach works well, but is only supported in Chrome, since it's the only browser that currently supports webp encoding (canvas.toDataURL("image/webp")). And so I'm using the captureStream approach as a backup for Firefox (and using libwebpjs for Safari).
So now on to my question: Is there a way to control the video quality of the canvas stream? And if not, has something like this been considered by the browsers / w3c?
Here's a screenshot of one of the frames of the video generated by whammy:
And here's the same frame generated by the MediaRecorder/canvas.captureStream approach:
My first thought is to artificially increase the resolution of the canvas that I'm streaming, but I don't want the output video to be bigger.
I've tried increasing the frame rate passed to the captureStream method (thinking that there may be some strange frame interpolation stuff happening), but this doesn't help. It actually degrades quality if I make it too high. My current theory is that the browser decides on the quality of the stream based on how much computational power it has access to. This makes sense, because if it's going to keep up with the frame rate that I've specified, then something has to give.
So the next thought is that I should slow down the rate at which I'm feeding the canvas with images, and proportionally lower the FPS value that I pass into captureStream. The problem with that is that even if it solved the quality issue, I'd end up with a video that runs slower than it's meant to.
Edit: Here's a rough sketch of the code that I'm using, in case it helps anyone in a similar situation.
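A minimal version of that pattern looks roughly like this (the canvas id, frame rate, duration, and mime type here are illustrative, not necessarily the asker's actual values):

```javascript
// Sketch: record a canvas to a webm Blob via captureStream + MediaRecorder.
// Assumes a <canvas id="canvas"> that is being drawn to elsewhere.
const canvas = document.getElementById("canvas");
const stream = canvas.captureStream(30); // 30 fps is a hint, not a guarantee

const chunks = [];
const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: "video/webm" });
  const url = URL.createObjectURL(blob); // e.g. assign to a <video>'s src
};

recorder.start();
// ... keep drawing frames to the canvas ...
setTimeout(() => recorder.stop(), 5000); // stop after 5 s, for example
```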

These are compression artifacts, and there is not much you can do for now...
Video codecs are built mainly with the idea of showing real-life colors and shapes, a bit like JPEG at a really low quality setting. They will also do their best to store as little information as possible between keyframes (some using motion-estimation algorithms) so that less data needs to be kept.
These codecs normally have configurable settings that would let us request constant-quality encoding, but since the MediaRecorder spec is codec-agnostic, it doesn't (yet) provide an option in the API for us web devs to set anything other than a fixed bit rate (which won't help us much here).
There is this proposal, which asks for such a feature though.
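For reference, here is how that one available knob, the fixed bit rate, is set (the numbers are arbitrary examples, and the canvas variable is assumed to exist):

```javascript
// The only quality-related option MediaRecorder exposes today is a
// fixed target bit rate. 5 Mbps here is an arbitrary example value.
const recorder = new MediaRecorder(canvas.captureStream(30), {
  mimeType: "video/webm",
  videoBitsPerSecond: 5000000,
});
```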

Related

Do plain <video> elements or video plugins load faster?

It might be a duplicate, but I didn't quite find the answer, so: what is more efficient, using just a plain video element (if I don't need support for IE8, for example, and all the fancy stuff), or using some plugin for video like video.js?
My main concern is load time - is there something that makes video files load faster when using video plugins?
With a plugin (be it Flash or a framework like video.js) you have the load/initialization time of the plugin itself to consider, though that will usually be negligible compared to the pre-load time of the video. Some Flash players have a fairly large .swf file, though most (like most of the .js frameworks you'll come across) are pretty well optimized these days.
The largest chunk of time, with either, will be the pre-buffering needed to get to a state where the video can play through without further buffering. This is impacted by the resolution, frame rate, etc. of the video itself, and by optimizations like the positioning of the MOOV atom (if it's at the start of an .mp4, for instance, then the browser doesn't have to read/cache the entire clip before it starts playing).
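If you want to put numbers on that pre-buffering cost, a rough sketch of a measurement (the file URL is a placeholder):

```javascript
// Rough sketch: measure how long a clip takes to become playable.
// "video.mp4" is a placeholder URL.
const video = document.createElement("video");
video.preload = "auto"; // ask the browser to start buffering immediately
const t0 = performance.now();
video.addEventListener("canplaythrough", () => {
  // Fires when the browser estimates it can play through without stalling.
  console.log("Playable after " + Math.round(performance.now() - t0) + " ms");
});
video.src = "video.mp4";
```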
Flash (a plugin) may have better support for different types of media (for instance if you want to use a streaming format like HLS or DASH) but may restrict the devices/platforms you can run on. HTML5 and the <video> tag will, as you point out, only be available on more modern browsers, and with or without a player framework to extend it, it offers a lot of flexibility (and its capabilities are improving with time). This is a good overview of the pros and cons.

HTML5 audio - currentTime attribute inaccurate?

I'm playing around a bit with the HTML5 <audio> tag and I noticed some strange behaviour that has to do with the currentTime attribute.
I wanted to have a local audio file played and let the timeupdate event detect when it finishes by comparing the currentTime attribute to the duration attribute.
This actually works fine if I let the song play from beginning to end: the end of the song is detected correctly.
However, changing the currentTime manually (either directly through JavaScript or by using the browser-based audio controls) results in the API no longer reporting the correct currentTime value; it seems to be set some seconds ahead of the position that's actually playing.
(These "some seconds" ahead are based on Chrome, Firefox seems to completely going crazy which results in the discrepancy being way bigger.)
A little jsFiddle example about the problem: http://jsfiddle.net/yp3o8cyw/2/
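For reference, the pattern being described boils down to something like this (the file name is a placeholder, and this is not necessarily the fiddle's exact code):

```javascript
// Compare currentTime to duration on each timeupdate event to detect
// the end of playback. "song.mp3" is a placeholder.
const audio = new Audio("song.mp3");
audio.addEventListener("timeupdate", () => {
  if (audio.currentTime >= audio.duration) {
    console.log("Playback finished");
  }
});
audio.play();
```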
Can anybody tell me why this happens, or have I just misunderstood what the API is supposed to do?
P.S.: I just noticed this actually only happens with MP3-encoded files; OGG files are doing totally fine.
After hours of battling this mysterious issue, I believe I have figured out what is going on here. This is not a question of .ogg vs .mp3, this is a question of variable vs. constant bitrate encoding on mp3s (and perhaps other file types).
I cannot take the credit for discovering this, only for scouring the interwebs. Terrill Thompson, a gentleman and a scholar, wrote a detailed article about this problem back on February 1st, 2015, which includes the following excerpt:
Variable Bit Rate (VBR) uses an algorithm to efficiently compress the media, varying between low and high bitrates depending on the complexity of the data at a given moment. Constant Bit Rate (CBR), in contrast, compresses the file using the same bit rate throughout. VBR is more efficient than CBR, and can therefore deliver content of comparable quality in a smaller file size, which sounds attractive, yes?

Unfortunately, there’s a tradeoff if the media is being streamed (including progressive download), especially if timed text is involved. As I’ve learned, VBR-encoded MP3 files do not play back with dependable timing if the user scrubs ahead or back.
I'm writing this for anyone else who runs into this syncing problem (which makes precise syncing of audio and text impossible), because if you do, it's a real nightmare to figure out what is going on.
My next step is to do some more testing, and finally to figure out an efficient way to convert all my .mp3s to constant bit rate. I'm thinking FFmpeg may be able to help, but I'll explore that in another thread. Thanks also to Loilo for originally posting about this issue and to Brad for the information he shared.
First off, I'm not actually able to reproduce your problem on my machine, but I only have a short MP3 file handy at the moment, so that might be the issue. In any case, I think I can explain what's going on.
MP3 files (MPEG) are very simple streams and do not have absolute positional data within them. It isn't possible from reading the first part of the file to know at what byte offset some arbitrary frame begins. The media player seeks in the file by needle dropping. That is, it knows the size of the entire track and roughly how far into the track your time offset is. It guesses and begins decoding, picking up as soon as it synchronizes to the next frame header. This is an imprecise process.
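To make that concrete, the initial guess is essentially a linear interpolation over the file; a sketch of the idea (not any browser's actual code):

```javascript
// Illustration of "needle dropping": estimate a byte offset from a time
// offset, assuming a constant bit rate throughout the file. VBR files
// violate exactly this assumption, which is why seeking drifts.
function estimateByteOffset(targetSeconds, durationSeconds, fileSizeBytes) {
  return Math.floor((targetSeconds / durationSeconds) * fileSizeBytes);
}
// The decoder then scans forward from that guess until it hits the next
// MP3 frame header and resumes decoding from there.
```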
Ogg is a more robust container and has time offsets built into its frame headers. Seeking in an Ogg file is much more straightforward.
The other issue is that most browsers that support MP3 do so only because the codec is already available on your system. Ogg Vorbis and MP3 playback usually go through completely different libraries with different APIs. While the web standards do a lot to provide a common abstraction, minor implementation details cause quirks like the one you are seeing.

Perfect Synchronization with Web Audio API

I'm working on a simple audio visualization application that uses a Web Audio API analyser to pull frequency data, as in this example. As you'd expect, the more visual elements I add to my canvases, the more latency there is between the audio and the yielded visual results.
Is there a standard approach to accounting for this latency? I can imagine a lookahead technique that buffers the upcoming audio data. I could work with synchronizing the JavaScript and Web Audio clocks, but I'm convinced that there's a much more intuitive answer. Perhaps it is as straightforward as playing the audio aloud with a slight delay (although this is not nearly as comprehensive).
The dancer.js library seems to have the same problem (always has a very subtle delay), whereas other applications seem to have solved the lag issue entirely. I have so far been unable to pinpoint the technical difference. SoundJS seems to handle this a bit better, but it would be nice to build from scratch.
Any methodologies to point me in the right direction are much appreciated.
I think you will find some answers to precise audio timing in this article:
http://www.html5rocks.com/en/tutorials/audio/scheduling/
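The core of that article's technique is a coarse JavaScript timer that repeatedly books events a little ahead of time against the sample-accurate audio clock; a condensed sketch (playNoteAt and the timing values are placeholders):

```javascript
// Lookahead scheduling: a sloppy JS timer books events slightly in the
// future on the Web Audio clock, which then fires them sample-accurately.
const ctx = new AudioContext();
const lookahead = 0.1; // schedule 100 ms ahead (tunable)
let nextNoteTime = ctx.currentTime;

function scheduler() {
  while (nextNoteTime < ctx.currentTime + lookahead) {
    playNoteAt(nextNoteTime); // your own function: start an oscillator/buffer
    nextNoteTime += 0.25;     // e.g. 16th notes at 60 bpm
  }
}
setInterval(scheduler, 25); // the JS timer only needs to be roughly on time
```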
SoundJS uses this approach to enable smooth looping, but still uses javascript timers for delayed playback. This may not help you sync the audio timing with the animation timing. When I built the music visualizer example for SoundJS I found I had to play around with the different values for fft size and tick frequency to get good performance. I also needed to cache a single shape and reuse it with scaling to have performant graphics.
Hope that helps.
I'm concerned when you say the more visual elements you add to your canvases, the more latency you get in audio. That shouldn't really happen quite like that. Are your canvases being animated using requestAnimationFrame? What's your frame rate like?
You can't, technically speaking, synchronize the JS and Web Audio clocks: the Web Audio clock is the audio hardware clock, which is literally running off a different clock crystal than the system clock (on many systems, at least). The vast majority of web audio (ScriptProcessorNodes being the major exception) shouldn't have additional latency introduced when your main UI thread becomes a bit more congested.
If the problem is the analysis just seems to lag (i.e. the visuals are consistently "behind" the audio), it could just be the inherent lag in the FFT processing. You can reduce the FFT size in the Analyser, although you'll get less definition then; to fake up fixing it, you can also run all the audio through a delay node to get it to re-sync with the visuals.
Also, you may find that the "smoothing" parameter on the Analyser makes it less time-precise - try turning that down.
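Both tweaks look like this in code (the fftSize, smoothing, and delay values are example numbers to tune, not recommendations):

```javascript
// Smaller FFT + less smoothing = a more time-precise analyser; a DelayNode
// can push the audible output back in line with the visuals if needed.
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
analyser.fftSize = 256;               // less frequency detail, lower latency
analyser.smoothingTimeConstant = 0.3; // default is 0.8; lower reacts faster

const delay = ctx.createDelay();
delay.delayTime.value = 0.05; // 50 ms; tune by eye and ear

// Wiring (source is whatever node feeds your audio):
// source.connect(analyser);       // the analyser drives the visuals
// source.connect(delay);
// delay.connect(ctx.destination); // what you hear is slightly delayed
```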

JavaScript detect if hardware acceleration is enabled

Is it possible to detect whether a browser has support for hardware-accelerated page rendering, and whether it has actually been enabled? I am aware that Firefox 4, IE9 and Chrome support it, but it may or may not be enabled depending on the browser version, the OS, and the computer hardware itself.
Is this possible using some form of DOM sniffing?
Like all other browser-specific capabilities, probably the best thing you can do is to devise some sort of feature test and actually measure the performance of what matters to you.
You can do this when the first page on your site is loaded, then set a cookie with the result so you don't have to repeat the test on subsequent visits for as long as the cookie is present.
Depending upon what type of performance you care the most about, you can devise a test that's pretty specific to that. For example if you cared a lot about image scaling, you could devise a JS test that scales an image among a bunch of different sizes in a big enough loop to get a measurable time and then decide what your timing threshold is.
Not only would this be better than a single binary choice of acceleration on or off, but it would also test the specific features you care about and show how fast they actually are.
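A sketch of such a test (the image URL, sizes, iteration count, and threshold are all placeholders):

```javascript
// Feature-test sketch: time repeated canvas image scaling and compare
// the result against a threshold. All values here are placeholders.
function benchmarkScaling(img, iterations) {
  const canvas = document.createElement("canvas");
  canvas.width = 512;
  canvas.height = 512;
  const ctx = canvas.getContext("2d");
  const t0 = performance.now();
  for (let i = 0; i < iterations; i++) {
    const size = 64 + (i % 8) * 64; // cycle through several target sizes
    ctx.drawImage(img, 0, 0, size, size);
  }
  return performance.now() - t0;
}

const img = new Image();
img.onload = () => {
  const ms = benchmarkScaling(img, 200);
  // Remember the verdict so the test doesn't run on every page load.
  document.cookie = "fastGraphics=" + (ms < 100 ? "1" : "0");
};
img.src = "test.png"; // placeholder image
```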
I recently discovered a handy command-line switch for Chrome (I am using v. 16.0.912) that results in red borders being painted around HTML (CSS3) elements that are actually being hardware accelerated. You can read more details in a blog post I have published on this subject.

Displaying a local gstreamer stream in a browser

I have a camera feed coming into a Linux machine using a V4L2 interface as the source for a GStreamer pipeline. I'm building an interface to control the camera, and I would like to do so in HTML/JavaScript, communicating with a local server. The problem is getting a feed from the gst pipeline into the browser. The options for doing so seem to be:
A loopback from gst to a v4l2 device, which is displayed using Flash's webcam support
Outputting an MJPEG stream which is displayed in the browser
Outputting an RTSP stream which is displayed by Flash
Writing a browser plugin
Overlaying a native X application over the browser
Has anyone had experience solving this problem before? The most important requirement is that the feed be as close to real time as possible. I would like to avoid flash if possible, though it may not be. Any help would be greatly appreciated.
You already thought about multiple solutions. You could also stream Ogg/Vorbis/Theora or VP8 to an Icecast server; see the OLPC GStreamer wiki for examples.
Since you are looking for a python solution as well (according to your tags), have you considered using Flumotion? It's a streaming server written on top of GStreamer with Twisted, and you could integrate it with your own solution. It can stream over HTTP, so you don't need an icecast server.
Depending on the codecs, there are various tweaks that allow low latency. Typically, with Flumotion, locally, you could get a few seconds of latency, and that can be lowered I believe (x264enc can be tweaked to reach less than a second of latency, IIRC). Typically, you have to reduce the keyframe distance and also limit the motion-vector estimation to a few nearby frames: that will probably reduce the quality and raise the bitrate, though.
What browsers are you targeting? If you can ignore Internet Explorer, you should be able to stream Ogg/Theora video and/or WebM video directly to the browser using the <video> tag. If you need to support IE as well, though, you're probably reduced to a Flash applet. I just set up a web stream using Flumotion and the free version of Flowplayer http://flowplayer.org/ and it's working very well. Flowplayer has a lot of advanced functionality that I have barely even begun to explore.
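On the browser side, playing such a stream can be as simple as pointing a <video> element at the stream URL; a sketch (the host, port, and path are placeholders for wherever your server exposes the stream):

```javascript
// Placeholder URL: wherever Flumotion/Icecast serves the WebM stream.
const video = document.createElement("video");
video.src = "http://localhost:8800/stream.webm";
video.autoplay = true;
video.controls = true;
document.body.appendChild(video);
```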
