Getting load data about YouTube videos with JavaScript

I am writing a Chrome extension that inserts JavaScript into webpages so that things such as load times can be analyzed (for troubleshooting purposes). I was wondering if there is any way to gather data about YouTube videos, such as the size of the video, buffering speed, etc., using JavaScript.

You should look into the YouTube APIs. You could at least retrieve the length of the video. Apart from that, buffering speed is most likely out of the scope of client-side JavaScript; you'd have to resort to a native application plugin for that.
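That said, if the extension's content script runs in the page, the HTML5 `<video>` element itself exposes the duration and the buffered time ranges, which gets you part of the way. A minimal sketch (the helper name `bufferStats` is mine):

```javascript
// Read duration and buffered ranges from a page's <video> element,
// e.g. document.querySelector('video') in a content script.
function bufferStats(video) {
  const ranges = [];
  // video.buffered is a TimeRanges object: indexed start()/end() pairs.
  for (let i = 0; i < video.buffered.length; i++) {
    ranges.push([video.buffered.start(i), video.buffered.end(i)]);
  }
  return { duration: video.duration, buffered: ranges };
}
```

Polling this over time gives a rough picture of how fast the buffer is filling, though not the underlying network throughput.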

Related

Decoding opus in chunks using AudioDecoder

I'm developing an application whose business logic depends heavily on sound precision, hence I'm using the Web Audio API. The app can manipulate audio files in multiple ways (fading in and out, programmatically seeking through audio, playing one clip multiple times, etc.).
I've tried to implement this functionality using MediaElementAudioSourceNode, but I had a hard time getting everything together. Using this type of node, I wasn't sure how to implement some of the features like scheduling start time in the AudioContext timeline.
In the first iteration, I implemented a simple download-and-decode method where the audio files are first downloaded as a whole from the network, stored in IndexedDB for caching purposes, then decoded into an AudioBuffer and played as an AudioBufferSourceNode connected to the AudioContext.
As we already know, the decoding function is quite slow for larger files. After running some tests, I've realized that decoding OPUS is slower than decoding MP3. Also, decoding MP3 is slower than decoding WAV.
For that reason, in the second implementation, I've used decode-audio-data-fast and MP3 files. The decoding process is now faster as they're decoded in chunks on multiple threads, but I have to download bigger files.
However, I'd like to avoid downloading and decoding everything up front if possible. If that is not possible, I'm wondering if I can progressively download files and feed the chunks to the new WebCodecs API, then route them to the AudioContext to be played and manipulated. The files are hosted on an S3 bucket. In theory, as far as I know, this should be entirely possible; however, due to the lack of examples and documentation (and probably my limited experience with this), I can't figure out exactly how to implement it.
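For the WebCodecs route the question describes, here is a rough sketch of wiring up an `AudioDecoder` for Opus. The sample rate, channel count, and chunk typing are assumptions; a real Opus stream needs correct per-chunk timestamps, and planar multi-channel output needs one `copyTo` per plane.

```javascript
// Sketch, assuming WebCodecs support (Chromium-based browsers).
function makeOpusDecoder(onPcm) {
  const decoder = new AudioDecoder({
    output: (audioData) => {
      // Copy the first plane out; for planar multi-channel data,
      // repeat copyTo with each planeIndex.
      const pcm = new Float32Array(audioData.numberOfFrames);
      audioData.copyTo(pcm, { planeIndex: 0 });
      onPcm(pcm, audioData.timestamp);
      audioData.close(); // release the decoder's internal buffer
    },
    error: (e) => console.error('decode error:', e),
  });
  decoder.configure({ codec: 'opus', sampleRate: 48000, numberOfChannels: 2 });
  return decoder;
}

// Feed progressively downloaded bytes as they arrive, e.g. from a
// fetch() ReadableStream of the S3 object:
// decoder.decode(new EncodedAudioChunk({ type: 'key', timestamp: 0, data: bytes }));
```

The decoded PCM can then be copied into AudioBuffers (or an AudioWorklet ring buffer) for scheduling on the AudioContext timeline.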

Possible to process all sounds from webpage with javascript simultaneously?

I've played around with the Web Audio API before, but it's very specific and unambiguous in that it wants all the sources connected to the destination for it to work. However, given how iframes and other mechanisms can introduce sound, it seems an extremely elaborate script would be needed to tie new sources into an analyser node headed to the output.
Note I'm not talking about routing a stream from a microphone, like here; there shouldn't be any permission required. Also, I'm not talking about audio sources already hooked into the Web Audio API; there are a ton of examples of processing audio from inside the Web Audio API. I'm curious whether there's a generic way to process audio on a page before (or after) it hits the speakers.
Essentially I was curious if anyone has seen or built an application that's reactive to audio in html and or has thoughts on putting something like this together.
The solution I have in mind would be a script that triggers when media is played and attaches the media source to the Web Audio API > an analyser node > destination. I haven't found any JavaScript event that appears to work this way.
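One way to approximate the trigger described above: media events like `play` don't bubble, but they do propagate in the capture phase, so a single document-level capture listener can catch playback starting anywhere in the page. A sketch (same-origin media only; cross-origin elements and iframes remain out of reach, and `createMediaElementSource` can only be called once per element):

```javascript
// Sketch: route any same-origin media element through an analyser.
function tapPageAudio() {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.connect(ctx.destination);
  const wired = new WeakSet(); // createMediaElementSource is once-per-element
  document.addEventListener('play', (e) => {
    const el = e.target;
    if (!(el instanceof HTMLMediaElement) || wired.has(el)) return;
    wired.add(el);
    // Re-route the element's output: element -> analyser -> speakers.
    ctx.createMediaElementSource(el).connect(analyser);
  }, true); // true = capture phase, since media events don't bubble
  return analyser;
}
```

Reading `analyser.getByteFrequencyData(...)` on an animation frame then gives you something to react to visually.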

How do I compress multiple Web Audio sources/tracks into one?

We are making a web-based music editor and mixer based on the Web Audio API. Users can mix together multiple tracks, crop tracks, etc. The actual mixing together of the tracks just involves playing back all the sources at once.
We want to be able to add the option to save the mix and make it available for download to a user's computer. Is there some way to do this on the front end (like connecting all the sources to one destination/export node), or even the backend (we are using RoR)?
RecorderJS does exactly what you need, and it could not possibly be easier to use. Really, really great library.
https://github.com/mattdiamond/Recorderjs
P.S. Look into OfflineAudioContext and my answer to this question (Web audio API: scheduling sounds and exporting the mix) for info on doing a faster-than-realtime mixdown of your audio.
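On the export side, the rendered result still has to be serialized to a file. Here is a minimal PCM16 WAV encoder sketch along the lines of what RecorderJS does internally (the function name `encodeWav` is mine); feed it the channel data from your mixed `AudioBuffer`, e.g. `[buffer.getChannelData(0), buffer.getChannelData(1)]`:

```javascript
// Encode Float32 channel data ([-1, 1]) as a 16-bit PCM WAV ArrayBuffer.
function encodeWav(channels, sampleRate) {
  const numCh = channels.length;
  const frames = channels[0].length;
  const bytes = 44 + frames * numCh * 2; // 44-byte header + PCM16 data
  const buf = new ArrayBuffer(bytes);
  const view = new DataView(buf);
  const writeStr = (o, s) => { for (let i = 0; i < s.length; i++) view.setUint8(o + i, s.charCodeAt(i)); };
  writeStr(0, 'RIFF'); view.setUint32(4, bytes - 8, true); writeStr(8, 'WAVE');
  writeStr(12, 'fmt '); view.setUint32(16, 16, true);
  view.setUint16(20, 1, true);                        // format: PCM
  view.setUint16(22, numCh, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numCh * 2, true);   // byte rate
  view.setUint16(32, numCh * 2, true);                // block align
  view.setUint16(34, 16, true);                       // bits per sample
  writeStr(36, 'data'); view.setUint32(40, frames * numCh * 2, true);
  let off = 44;
  for (let i = 0; i < frames; i++) {
    for (let c = 0; c < numCh; c++) {
      const s = Math.max(-1, Math.min(1, channels[c][i])); // clamp
      view.setInt16(off, s < 0 ? s * 0x8000 : s * 0x7fff, true);
      off += 2;
    }
  }
  return buf;
}
```

Wrap the result in a `Blob` with type `audio/wav` and hand it to the user via an object URL for download.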
The users' data appears to be on the client side?
Basically, when you convert data to a base64 data URI, the data is displayed inline, so the pieces can be appended one by one into a single blob object and downloaded.
But this method is only good for small files: it causes crashes and freezing in most browsers. In my own tests it only worked reliably for blobs under about 10 MB, though this will surely improve.
<audio controls><source src="data:audio/ogg;base64,BASE64.......BASE64......BASE64............."></audio>
or
<a href="data:audio/ogg;base64,BASE64...BASE64..BASE64....">Download</a>
Probably not the approach you want, just an idea, but your project is interesting ;)
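For larger files, a `Blob` plus an object URL sidesteps the size problems of inline data: URIs entirely. A sketch (browser-only; the helper name `downloadBlob` is made up here):

```javascript
// Trigger a client-side download of binary parts without a data: URI.
function downloadBlob(parts, filename, mime = 'audio/ogg') {
  const blob = new Blob(parts, { type: mime }); // parts: ArrayBuffers/strings
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = filename;
  a.click();
  URL.revokeObjectURL(a.href); // free the object URL once clicked
}
```

Object URLs reference the blob in memory rather than embedding the bytes in the DOM, so the browser doesn't choke on a multi-megabyte attribute value.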

Downscaling/resizing a video during upload to a remote website

I have a web application written in Ruby on Rails that uploads videos from the user to the server using a form (I actually use a jQuery uploader that uploads directly to S3, but I don't think this is relevant).
In order to decrease the upload time for a video I want to downscale it, e.g. if the video size is 1000x2000 pixels, I want to downscale it to 500x1000. Is there a way to do this on the client side while the video uploads? Is there a JavaScript library that can do that?
Recompressing a video is a non-trivial problem that isn't going to happen in a browser any time soon.
With the changes in HTML5, it is theoretically possible if you can overcome several problems:
You'd use the File API to read the contents of a file that the user selects using an <input type="file"> element. However, it looks like the FileReader reads the entire file into memory before handing it over to your code, which is exactly what you don't want when dealing with large video files. Unfortunately, this is a problem you can do nothing about. It might still work, but performance will probably be unacceptable for anything over 10-20 MB or so.
Once you have the file's data, you have to actually interpret it – something usually accomplished with a demuxer to split the container (MPEG, etc.) file into video and audio streams, and a codec to decompress those streams into raw image/audio data. Your OS comes with several implementations of codecs, none of which are accessible from JavaScript. There are some JS video and audio codec implementations, but they are experimental and painfully slow; and they only implement the decompressor, so you'd be stuck when it comes to creating output.
Decompressing, scaling, and recompressing audio and video is extremely processor-intensive, which is exactly the kind of workload that JavaScript (and scripting languages in general) is worst at. At the very minimum, you'd have to use Web Workers to run your code on a separate thread.
All of this work has been done several times over; you're reinventing the wheel.
Realistically, this is something that has to be done server-side, and even then it's not a trivial endeavor.
If you're desperate, you could try something like a plugin/ActiveX control that handles the compression, but then you have to convince users to install a plugin (yuck).
You could use a gem like Carrierwave (https://github.com/jnicklas/carrierwave). It has the ability to process files before storing them. Even if you upload them directly to S3 first with javascript, you could then have Carrierwave retrieve the file, process it, and store it again.
Otherwise you could just have Carrierwave deal with the file from the beginning (unless you are hosting with Heroku and need to avoid the timeouts by going direct to S3).

Waveform visualization in JavaScript from audio [duplicate]

This question already has answers here:
How to write a web-based music visualizer?
(4 answers)
Closed 5 years ago.
I'm trying to use JavaScript to display the waveform for an audio file, but I don't even know how to get started. I found the Audio Data API, but I am unfamiliar with most audio terms and don't really know what it provides or how to manipulate it. I found examples of waveforms in JavaScript, but they are too complicated / I can't comprehend what is going on. So my question is: how can you use JavaScript to create a waveform of a song on canvas, and what exactly is the process behind it?
Here's some sample code from my book (HTML5 Multimedia: Develop and Design) that does exactly that: Audio Waveform. It uses the Mozilla Audio Data API.
The code simply takes snapshots of the audio data and uses it to draw on the canvas.
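The core of any such snapshot-and-draw approach is reducing the raw samples to one min/max pair per pixel column before drawing. A sketch of that reduction (pure JS; `samplesToPeaks` is my name for it), which you could feed from `AudioBuffer.getChannelData(0)` or from the Audio Data API's sample events:

```javascript
// Reduce raw samples ([-1, 1] floats) to `columns` [min, max] pairs,
// one pair per pixel column of the waveform.
function samplesToPeaks(samples, columns) {
  const bucket = Math.floor(samples.length / columns);
  const peaks = [];
  for (let c = 0; c < columns; c++) {
    let min = 1, max = -1;
    for (let i = c * bucket; i < (c + 1) * bucket; i++) {
      if (samples[i] < min) min = samples[i];
      if (samples[i] > max) max = samples[i];
    }
    peaks.push([min, max]);
  }
  return peaks;
}

// Drawing: for each column x on a canvas of height h, stroke a vertical
// line from (x, h/2 * (1 - max)) to (x, h/2 * (1 - min)) on the 2D context.
```

This is also essentially what waveform libraries do under the hood; only the sample source and the drawing style differ.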
Here's an article from the BBC's R&D team showing how they did exactly that to build a couple of JS libraries and more besides. The results all seem to be openly available and rather good.
Rather than use the Audio Data API, which you cannot be sure is supported by all your users' browsers, it might be better to generate your waveform data server-side (the BBC team created a C++ app to do that); then at least you are decoupling the client-side display aspect from the playback aspect. Also, bear in mind that the entire audio file has to reach the browser before you can calculate peaks and render a waveform. I am not sure whether streaming files (e.g. MP3) can be used to calculate peaks as the file comes in. But overall it is surely better to calculate your peaks once, server-side, then simply send the data via JSON (or even create and cache your graphics server-side; there are numerous PHP chart libraries, or you can do it natively with GD).
For playback on the browser, there are several good (non-Flash!) options. Personally I like SoundManager 2 as the code is completely decoupled from display, meaning that I am free to create whatever UI / display that I like (or that the client wants). I have found it robust and reliable although I had some initial difficulty on one project with multiple players on the same page. The examples on their site are not great (imho) but with imagination you can do some cool things. SM2 also has an optional Flash fallback option for antique browsers.
I did just that with the Web Audio API, using a project called wavesurfer.
http://www.html5audio.org/2012/10/interactive-navigable-audio-visualization-using-webaudio-api-and-canvas.html
What it does is draw tiny rectangles, using an audio buffer to determine the height of each rectangle. Wavesurfer also supports playing and pausing with the space bar, and clicking on the wave to start playback from that point.
To check out what I made, go to this site:
Update: This POC website no longer exists.
This only works in Google Chrome, and maybe Safari, but I'm not sure about that.
Let me know if you want more info.
Not well supported yet, but take a look at this Firefox tone generator.
