Downscaling/resizing a video during upload to a remote website - javascript

I have a web application written in Ruby on Rails that uploads videos from the user to the server using a form (I actually use a jQuery uploader that uploads directly to S3, but I don't think this is relevant).
In order to decrease the upload time for a video I want to downscale it, e.g. if the video size is 1000x2000 pixels I want to downscale it to 500x1000. Is there a way to do so on the client side while the video uploads? Is there a JavaScript library that can do that?

Recompressing a video is a non-trivial problem that isn't going to happen in a browser any time soon.
With the changes in HTML5, it is theoretically possible if you can overcome several problems:
You'd use the File API to read the contents of a file that the user selects using an <input type="file"> element. However, it looks like the FileReader reads the entire file into memory before handing it over to your code, which is exactly what you don't want when dealing with large video files. Unfortunately, this is a problem you can do nothing about. It might still work, but performance will probably be unacceptable for anything over 10-20 MB or so.
Once you have the file's data, you have to actually interpret it – something usually accomplished with a demuxer to split the container (MPEG, etc.) file into video and audio streams, and a codec to decompress those streams into raw image/audio data. Your OS comes with several implementations of codecs, none of which are accessible from JavaScript. There are some JS video and audio codec implementations, but they are experimental and painfully slow, and they only implement the decompressor, so you'd be stuck when it comes to creating output.
Decompressing, scaling, and recompressing audio and video is extremely processor-intensive, which is exactly the kind of workload that JavaScript (and scripting languages in general) is worst at. At the very minimum, you'd have to use web workers to run your code on a separate thread.
All of this work has been done several times over; you're reinventing the wheel.
Realistically, this is something that has to be done server-side, and even then it's not a trivial endeavor.
If you're desperate, you could try something like a plugin/ActiveX control that handles the compression, but then you have to convince users to install a plugin (yuck).
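Even though the re-encoding itself is out of reach, the arithmetic of the downscale the asker wants (halving 1000x2000 to 500x1000) is trivial to do client-side. A minimal sketch, assuming a cap on the largest dimension (the function name and `maxDimension` parameter are my own):

```javascript
// Compute downscaled dimensions that preserve aspect ratio.
// Returns integers and never upscales a video that is already small enough.
function downscaleDimensions(width, height, maxDimension) {
  const largest = Math.max(width, height);
  if (largest <= maxDimension) {
    return { width, height }; // already within the cap
  }
  const scale = maxDimension / largest;
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

// The 1000x2000 video from the question, capped at 1000px:
console.log(downscaleDimensions(1000, 2000, 1000)); // { width: 500, height: 1000 }
```

The hard part remains everything described above: decoding the frames, scaling them, and re-encoding, which is why this ends up being a server-side job.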

You could use a gem like CarrierWave (https://github.com/jnicklas/carrierwave). It has the ability to process files before storing them. Even if you upload them directly to S3 first with JavaScript, you could then have CarrierWave retrieve the file, process it, and store it again.
Otherwise you could just have CarrierWave deal with the file from the beginning (unless you are hosting on Heroku and need to avoid the timeouts by going direct to S3).

Related

Decoding opus in chunks using AudioDecoder

I'm developing an application that has business logic that heavily depends on sound precision, hence I'm using the Web Audio API. The app has the functionality to manipulate audio files in multiple ways (fade in and fade out, programmatically seek through audios, play one audio multiple times, etc).
I've tried to implement this functionality using MediaElementAudioSourceNode, but I had a hard time getting everything together. Using this type of node, I wasn't sure how to implement some of the features like scheduling start time in the AudioContext timeline.
In the first iteration, I've implemented a simple download and decode method where the audio files are first downloaded as a whole from the network, stored to IndexedDB for caching purposes, then decoded into AudioBuffer and played as AudioBufferSourceNode connected to AudioContext.
As we already know, the decoding function is quite slow for larger files. After running some tests, I've realized that decoding OPUS is slower than decoding MP3. Also, decoding MP3 is slower than decoding WAV.
For that reason, in the second implementation, I've used decode-audio-data-fast and MP3 files. The decoding process is now faster as they're decoded in chunks on multiple threads, but I have to download bigger files.
However, I'd like to avoid downloading and decoding whole files if possible. If this is not possible, I'm wondering if I can progressively download files and feed the chunks to the new WebCodecs API, then route them to the AudioContext to be played and manipulated. The files are hosted on an S3 bucket. In theory, as far as I know, this should be entirely possible; however, due to a lack of examples and documentation (and probably my inexperience with this), I can't figure out exactly how to implement it.
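For what it's worth, the progressive-download half of that idea can be sketched independently of WebCodecs: S3 honours the HTTP `Range` header, so the file can be fetched in slices. A minimal sketch (the chunk size and names are my own; note that WebCodecs' `AudioDecoder` consumes demuxed packets, not raw Ogg/WebM bytes, so a demuxing step is still needed between the fetch and the decoder):

```javascript
// Plan inclusive byte ranges for progressively downloading a file of a
// known size in fixed-size chunks (S3 supports the Range request header).
function byteRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    const end = Math.min(start + chunkSize, totalSize) - 1; // inclusive end
    ranges.push({ start, end });
  }
  return ranges;
}

// In the browser, each range would be fetched and fed onward, roughly:
// for (const { start, end } of byteRanges(fileSize, 256 * 1024)) {
//   const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
//   const bytes = new Uint8Array(await res.arrayBuffer());
//   // demux `bytes` into packets, then pass each packet to an
//   // AudioDecoder via decoder.decode(new EncodedAudioChunk({ ... }))
// }
```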

Video compression when uploading a video in javascript(react) web application

In my React application I have included an image and video uploading feature. For images, I'm compressing the image before uploading it to the server. Now I need to do the same for videos as well. But I'm not sure if the video compression should be done before uploading (on the front end) or after uploading (on the back end). What would be the best way to do this, considering performance and efficiency?
Thanks.
For this kind of dedicated and isolated feature, I would really prefer a microservice which sits between the frontend and backend (preferably in the same data center as your server).
If you've got a good budget, a third-party API like Coconut is presumably performant and trouble-free.
For uploads from the web, you're better off compressing server-side. Compression on the client side is going to be quite CPU-heavy, and it won't be a good user experience if the user's computer freezes for long durations while interacting with your site. Not only that, you'll have to figure out a way to run ffmpeg or a similar tool using web workers in the browser, and it's mostly not worth the headache.
People generally set up a transcoding pipeline that can compress, resize, or convert the formats of user-generated videos in a batch process, usually with ffmpeg, or use cloud-based SaaS platforms if you don't want to do all the heavy lifting yourself.
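Such a pipeline mostly boils down to spawning ffmpeg with the right arguments per video. A sketch of building that argument list in Node (the encoder settings here are illustrative assumptions, not a tuned recommendation):

```javascript
// Build an ffmpeg argument list that downscales a video to a target
// height while keeping the aspect ratio, re-encoding as H.264/AAC.
// `-vf scale=-2:H` lets ffmpeg pick a matching even width automatically.
function ffmpegScaleArgs(inputPath, outputPath, targetHeight) {
  return [
    '-i', inputPath,
    '-vf', `scale=-2:${targetHeight}`,
    '-c:v', 'libx264',
    '-preset', 'medium',
    '-crf', '23',
    '-c:a', 'aac',
    '-movflags', '+faststart', // playback can start before the full download
    outputPath,
  ];
}

// A batch worker would then spawn it, e.g.:
// const { spawn } = require('child_process');
// spawn('ffmpeg', ffmpegScaleArgs('in.mp4', 'out-720p.mp4', 720));
```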
Full Disclaimer: we had a similar requirement and ended up starting mediamachine.io because most providers were too expensive for our needs.

Cordova | Sending videos to mobile phone

I'm looking for a way to transfer video files to a client's mobile without streaming. The reason is (client's request) to eliminate the cost of such a server due to an expected all-at-once high traffic.
So I have looked at base64 encoding; below are the times it takes to get the 19 MB file (one with a 100 Mb internet connection, the second with a 3G connection). This could make the waiting painful, especially on a 3G connection.
I have also considered using a byte array to significantly reduce the file size, but it's hard to pass it via JSON with all the escaping backslashes...
Finally, I have looked at another possible solution: transferring the video directly to the client's phone while the app is closed (pushing a notification once the file has arrived on the client's phone), but that probably runs into one of Cordova's limitations (as far as I'm aware).
I've been searching for a solution to this for weeks now, so I have placed a bounty on it, since I believe it's a question worth answering. Somebody someday will thank us for it. :) I'll be the first.
Much thanks, and happy coding.
Hosting vs app serving
First of all, you need to understand that no matter where the file is coming from (a file server for streaming, or an application server sending a base64-encoded string), the hosting costs are going to be similar (well, a file hosting server should be more efficient than anything you write, but that's a minor difference). You still need to store the file somewhere and you still need to send it over the network. The only difference is that in one case the Apache/IIS/whatever server you use is handling all the complex stuff, and in the second case you are going to be trying to recreate it all yourself.
Streaming vs Non-Streaming
When you serve a file (be it yourself, or through a file server) you can either allow it to be retrieved in chunks (streamed) or only as one huge file. In the first case (streaming), if the user stops watching halfway through the video you only need the server capacity to serve something like 60 or 70% of the file. In the second case (non-streaming), the user first has to wait for the file to be retrieved in its entirety, and on top of that it will always cost you the full 100%.
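Streaming in this sense is mostly the server honouring HTTP Range requests. If you do end up serving files from your own application server, the core of it is parsing the Range header; a simplified sketch (single-range requests only, names are my own):

```javascript
// Parse an HTTP Range header like "bytes=0-1023" against a known file size.
// Returns inclusive { start, end }, or null for malformed/unsatisfiable ranges.
function parseRange(header, fileSize) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header || '');
  if (!m || (m[1] === '' && m[2] === '')) return null;
  let start, end;
  if (m[1] === '') {
    // Suffix form "bytes=-500" means: the last 500 bytes.
    start = Math.max(fileSize - Number(m[2]), 0);
    end = fileSize - 1;
  } else {
    start = Number(m[1]);
    end = m[2] === '' ? fileSize - 1 : Math.min(Number(m[2]), fileSize - 1);
  }
  return start > end ? null : { start, end };
}
```

The server then responds with status 206 and a `Content-Range: bytes start-end/total` header for the requested slice.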
Precaching files
That's not to say nothing can be optimized. For example, if you are distributing a single file every week on Saturday at 6 pm, yet already know a full week beforehand what that file is, you could theoretically encrypt the file and serve it in the background, distributed over the course of the entire week. And yes, you could even do that while building a Cordova application (though it will be a bit harder and you might end up writing your own plugin). Still, that situation is incredibly rare and is definitely not worth the development time except in rare cases (e.g. it's often done with game files, but that's tens of GBs of data downloaded tens of thousands of times).

How do I compress multiple Web Audio sources/tracks into one?

We are making a web-based music editor and mixer based on the Web Audio API. Users can mix together multiple tracks, crop tracks, etc. The actual mixing together of the tracks just involves playing back all the sources at once.
We want to be able to add the option to save the mix and make it available for download to a user's computer. Is there some way to do this on the front end (like connecting all the sources to one destination/export node), or even the backend (we are using RoR)?
RecorderJS does exactly what you need, and it could not possibly be easier to use. Really, really great library.
https://github.com/mattdiamond/Recorderjs
P.S. Look into OfflineAudioContext and my answer to this question (Web audio API: scheduling sounds and exporting the mix) for info on doing a faster-than-realtime mixdown of your audio.
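The core of what a WAV recorder like RecorderJS does before writing the file is interleaving the per-channel sample buffers; a sketch of that step, with the faster-than-realtime mixdown mentioned above shown as browser-only comments:

```javascript
// Interleave two mono Float32Array channels into a single stereo buffer,
// the sample layout a WAV encoder expects (L, R, L, R, ...).
function interleave(left, right) {
  const out = new Float32Array(left.length + right.length);
  let i = 0;
  for (let j = 0; j < left.length; j++) {
    out[i++] = left[j];
    out[i++] = right[j];
  }
  return out;
}

// Faster-than-realtime mixdown (browser only): render the whole node
// graph into a buffer instead of playing it back in real time.
// const ctx = new OfflineAudioContext(2, sampleRate * lengthSeconds, sampleRate);
// ...connect all your AudioBufferSourceNodes to ctx.destination...
// const mixed = await ctx.startRendering();
// const stereo = interleave(mixed.getChannelData(0), mixed.getChannelData(1));
```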
Your users' data looks to be on the client side?
Basically, when data is base64-encoded into a data URI, it is represented inline, so chunks can be appended one by one into a single blob object and then downloaded.
But this method is only good for small files; it causes crashes and freezes in most browsers. In my own tests it only works well for blobs smaller than about 10 MB, though this will surely get better soon.
<audio controls><source src="data:audio/ogg;base64,BASE64.......BASE564......BASE64............."></audio>
or
<a href="data:audio/ogg;base64,BASE64...BASE64..BASE64....">Download</a>
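The snippets above assume you already have the base64 string; building the data URI from raw bytes looks like this (shown with Node's `Buffer`; in the browser, `FileReader.readAsDataURL(blob)` produces the same kind of string):

```javascript
// Build a data URI from raw bytes and a MIME type (Node version).
function toDataUri(bytes, mimeType) {
  const base64 = Buffer.from(bytes).toString('base64');
  return `data:${mimeType};base64,${base64}`;
}

// Example: encoding the 4-byte "OggS" magic of an Ogg file.
const uri = toDataUri(new Uint8Array([79, 103, 103, 83]), 'audio/ogg');
```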
Probably not your way, just an idea but your project is interesting ;)

Uncompressing content in browser on client side

I am interested to know about the possibilities of reducing HTTP requests to servers by sending different kinds of content in a single compressed file, uncompressing it later in the client's browser, and placing the resources (images, CSS, JS) where they should be.
I read somewhere that Firefox is working on a plan to offer such a feature in future releases, but it has not been done yet, and it would not be standard.
Can you suggest any solution for this? Can Flash be used to uncompress compressed files on the client side for later use?
Thanks
We did more or less what you describe on our site and are extremely happy with the response time.
The original files are all separated (HTML, CSS, JS, images) and we develop on them.
Then when moving to production we have a shell script that:
uses YUI Compressor to compress the CSS and JS
reads all images and converts them to data:image/png;base64,...
removes all blank spaces and comments from the HTML
puts all these resources inline in the HTML
The page is ~300 KB and usually cached. The server gzips it, so the real size travelling over the network is lower. We don't use any additional compression.
And then there is a second call to get the data (JSON for us) and start rendering it client-side.
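The image-inlining step of a pipeline like the one above can be sketched as a small transform. This is a simplified illustration, not the script described by the answer: the regex only handles double-quoted `.png` sources, and the file reader is injected so the same function works in a build script or a test:

```javascript
// Replace every local <img src="...png"> with an inline base64 data URI.
// `readFile(src)` must return the image bytes as a Buffer.
function inlineImages(html, readFile) {
  return html.replace(/<img([^>]*?)src="([^"]+\.png)"/g, (match, attrs, src) =>
    `<img${attrs}src="data:image/png;base64,${readFile(src).toString('base64')}"`);
}

// In a Node build script the reader would simply hit the filesystem:
// const fs = require('fs');
// const out = inlineImages(html, (src) => fs.readFileSync(src));
```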
I had to read your question a few times before I got what you were asking. It sounds like you want to basically combine all the elements of your site into a single downloadable file.
I'm fairly confident in saying I don't believe this is possible or desirable.
Firstly, you state that you've heard that Firefox may be supporting this. I haven't heard about that, but even if they do, how will you be able to use the feature while still supporting other browsers?
But even if you can do it, you've tagged this as 'performance-tuning', on the grounds that you'll be saving a few HTTP requests. In your effort to save HTTP requests to speed things up, you need to be cautious that you don't actually end up slowing things down.
Combining all the files may cut you down to one HTTP request, but your site may then load slower, as the whole thing would need to load before any of it would be ready for display (as opposed to a normal page load, where the page may take time to load but at least some of it may be ready for display quite quickly).
What you can do right now, and what will be useful for reducing HTTP requests, is combine your stylesheets into a single CSS file, your scripts into a single JS file, and groups of related images into single image files (google "CSS sprites" for more info on this technique).
Even then, you need to be careful about which files you combine: the point of the exercise is to reduce HTTP requests, so you need to take advantage of caching, or you'll end up making things worse rather than better. Browsers can only cache files that are the same across multiple pages, so you should only combine the files that won't change between page loads. So, for example, only combine the JavaScript files which are in use across all the pages on your site.
My final comment would be to re-iterate what I've already said: Be cautious about over-optimising to the point that you actually end up slowing things down.
