How to automatically compress photos/videos for website when uploaded? - javascript

I am working on a website where I let users upload pictures and videos. How would I automatically compress those videos/pictures server-side before storing them on my server/database? I don't want abnormally large files to slow down my website. If I were uploading the files myself I could obviously resize and optimize them myself, but is there a way to do this automatically for my users?

Well, that is a broad question, and the answer depends on the type of the files and the algorithm you select.
For images, you can simply re-save them as JPEG at a chosen quality percentage (the lower the quality, the smaller the file, but the worse the resulting picture looks). Example: http://blog.clonesinfo.com/how-to-reduce-compress-image-file-size-uploading-using-php-code/
If you want more options, or for example lossless quality, you should definitely look for a library or tool; see this question for more info: Which is the best PHP method to reduce the image size without losing quality
For videos, it gets a little more complicated, as making a video smaller requires re-encoding it and picking the right settings (the codec you pick will usually be the most compatible and efficient one, H.264, or something like Google's VP9). Note that re-encoding requires a significant amount of processing power on your server (which might become an issue if videos are large and long). Video encoding is a very wide topic that I cannot cover in one response; you can start by reading up on how H.264 works.
For video encoding you're also going to need a tool; probably the best choice is ffmpeg/avconv, plus some PHP library to make it easier to use.

Related

Ways to render images sequence from Canvas

Context
I'm creating some animated graphics using Canvas. I would like to save an image sequence of them. If I do it through a web browser, for obvious security reasons, it will ask me to save each file manually. I need to work around this.
I also need to render the images losslessly with an alpha channel, which is why I'm using a PNG image sequence and nothing else. Image sequences can be huge in terms of size (e.g. a full-HD sequence of 2 minutes at 30 frames/s will easily exceed 1 GB).
Question
What could the workarounds be? I think using Node.js could be helpful, because running server-side should allow me to save the image sequence without awaiting confirmations. Unfortunately, I don't know it very well; that's one of the reasons I'm asking.
I'm also looking for a "comfortable" solution. Below, someone seems able to do it using Python and MIME, but it seems really ponderous and slow.
Googling
Exporting HTML canvas as an image sequence, but it doesn't cover the Node.js solution.
Saving a sequence of images in a web canvas to disk, but not the same context: he is providing a web service for some clients.
https://forum.processing.org/two/discussion/19218/how-to-render-p5-js-sketch-as-a-movie doesn't bring any solution, but confirms what I've explained.
https://github.com/spite/ccapture.js/#limitations: this doesn't allow me to export PNG images, only video, which isn't what I'm searching for.
http://jeremybouny.fr/en/articles/server_side_canvas_node/
Disclaimer
I'm not a native English speaker, I tried to do my best, please, feel free to edit it if something is badly written

Base64 video encoding - good\bad idea? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I'm working on a mobile front-end project using Cordova, and the backend developer I'm working with insists that the media files (images/videos) should be transferred as base64 encoded in JSON files.
Now, with images it's so far working. Although it freezes the UI for a few seconds, that can be deferred somehow.
The videos, however, are so far a pain to handle: the base64 string of a single, simple video is nearly 300,000 characters long. It sends my poor laptop into a wild spin, and it gets the URI after about 20 seconds of going through the code (and it's still not working, and I don't feel like debugging it because it nearly crashes my laptop with every refresh).
So my questions are:
Is base64 encoding a popular way of transferring media in mobile development?
And if not, what alternative way would you recommend using to transfer/present these videos?
I should mention though, the videos are meant to be viewed at once by a large number of people (hundreds perhaps), and the other developer says that their servers can't handle such traffic.
Many thanks for any advice; I couldn't find this info anywhere. :)
[...] the backend developer [...] insists that the media files (images/videos) should be transferred as base64 encoded in json files.
This is a very bad (and silly) idea upfront. You do not want to transfer large amounts of binary data as strings, and especially not Unicode strings.
Here you need to arm up and convince your backend dev rebel to change his mind with whatever it takes: play some Bieber or Nickelback, change his background image to something Hello Kitty, or take a snapshot of his screen, set it as his background, and hide all the icons and the bar. This should help you change his mind. If not, place a Webasto heater in his office at max and lock all doors and windows.
Is base64 encoding a popular way of transferring media in mobile development?
It is popular and has a relatively long history; it became very common on Usenet and so forth. In those days, however, the amount of data was very low compared to today, as everything was transferred over modems.
However, just because it is popular doesn't mean it is the right tool for everything. It is not very efficient, as it requires an encoding process which converts every three octets into four bytes, adding about 33% to the size.
On top of that, in JavaScript each string character is stored as two bytes due to the Unicode char-set, so your data is doubled as well as extended by 33%. Your 300 MB of data is now 300 × 2 × 1.33 = 798 MB (show that to your backend dev! :) as it's a real factor if the servers cannot handle large amounts of traffic).
This works fine for smaller files, but for larger files, as in your example, it can cause significant overhead in time, memory usage and, of course, bandwidth. And on the server side you would need to reverse the process, with its own overhead.
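The 33% figure above can be checked directly in Node.js with nothing but the built-in `Buffer`: every 3 input bytes become 4 base64 characters.

```javascript
// Demonstrate base64's size overhead: 3 bytes in -> 4 characters out.
const binary = Buffer.alloc(300000); // stand-in for 300,000 bytes of media
const asBase64 = binary.toString('base64');

// 300000 / 3 * 4 = 400000 characters, i.e. a ratio of 4/3 ≈ 1.33.
const overhead = asBase64.length / binary.length;
```

(The doubling to two bytes per character only applies while the string lives in a JavaScript engine's memory; over the wire a UTF-8 encoded base64 string is one byte per character.)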
And if not, what alternative way would you recommend using to transfer/present these videos?
I would recommend:
Separate meta-data out as JSON with a reference to the data. No binary data in the JSON.
Transfer the media data itself separately in native bytes (ArrayBuffer).
Send both at the same time to server.
The server then only needs to parse the JSON data into something edible for the backend; the binary data can go straight to disk.
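A small sketch of that split, with made-up field names for illustration: the metadata travels as a compact JSON string, while the media bytes stay binary (e.g. a `Buffer`/`ArrayBuffer` sent as a separate multipart part of the same request).

```javascript
// Sketch: keep metadata as JSON, keep media as raw bytes. Field names
// (name, size, type) are illustrative, not a fixed schema.
function buildUploadParts(fileName, mediaBytes) {
  const meta = JSON.stringify({
    name: fileName,
    size: mediaBytes.length,
    type: 'video/mp4'
  });
  // `meta` is a few dozen bytes of JSON; `media` is untouched binary
  // data with no base64 inflation at all.
  return { meta, media: mediaBytes };
}
```

On the server, the JSON part is parsed as usual and the binary part is streamed straight to disk, exactly as the answer describes.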
Update: I forgot to mention, as Pablo does in his answer, that you can look into streaming the data.
However, streaming is pretty much a synonym for buffering, so the bandwidth will be about the same, just provided in a more brute-force way (usually UDP versus TCP, i.e. loss of packets doesn't break the transfer). Streaming will limit your options more than buffering on the client, though.
My 2 cents...
Not sure why the "33% overhead" is always mentioned, when that's complete nonsense. Yes, it does initially add roughly that amount; however, there's a little thing called gzip (ever heard of it?). I've done tons of tests and the difference is typically negligible. In fact, sometimes the gzipped base64 string is actually smaller than the binary file. Check out this guy's tests. So please, can we stop spreading absolute fiction?
Base64 is a perfectly acceptable method of retrieving a video. In fact, it works amazing for a private messaging system. For instance, if you were using AWS S3, you could store the files privately so there is no URL.
However, the main disadvantage (imho) of using a gzipped base64 video is that you need to wait for the whole video to load, so pseudo-streaming is out of the question.
Base64 is a convenient (but not efficient) way of transferring binary data. It's inefficient because the transfer size will be 33% bigger than what you're originally transferring, so it's not a popular way of transmitting video. If you are planning to stream that video, you should be looking for an established protocol for doing just that.
I would recommend a streaming protocol (there are a lot you can choose from).
I think it is a bad idea; video files are large. But you can try with small video files.
Try an online encoder: https://base64.online/encoders/encode-video-to-base64
There you can convert a video to a Base64 data URI and try to insert it in HTML.
Result like this:
<video controls><source src="data:video/mpeg;base64,AAABuiEAAQALgBexAAABuwAMgBexBeH/wMAg4ODgAAA..."></video>

Display low resolution image and then high resolution image after few seconds (HTML)

I want to display lots of images on an HTML webpage, but I want to display a low-resolution image first and then the high-resolution image after a few seconds.
The reason is that the client's internet connection speed can't be determined, so I need to optimize the way my website loads its images.
I have actually gone through this link on Stack Overflow: Fast Image loading methods, low to high res with multiple backgrounds - javascript solution?
I tried to run the sample JavaScript code found in the answer, but it didn't seem to work.
And then I read about progressive JPEGs, but I don't know how to go about that either.
Any help will be appreciated.
It depends on how you generate/create your images as to whether they will be progressive or not. If your images are not progressive, and you have ImageMagick installed (many Linuxes do), you can convert an image from non-progressive (also known as baseline JPEG) to progressive with this command and try it out on your website:
convert nonProgressive.jpg PJPEG:Progressive.jpg
ImageMagick is available for Windows, OSX, Linux for free from here.
Another way to minimise image sizes is with jhead, and the following command strips out all EXIF information from your image to make it smaller - information removed is things like GPS coordinates, date and time picture was taken, camera model and focal length and shutter speed.
jhead -purejpg image.jpg
Updated Answer
In response to your further question about doing ALL your images: I am not here to tell you what to do! It is your website, and you can do as you wish. I was merely suggesting a way for you to try it out on an image and see if you like the results and the performance. If you want to apply it to all your images, it is quite easy, either using standard tools or GNU Parallel, which will do the job in a fraction of the time by using all your CPU cores. Whatever you do, I would urge you to make a backup first in case anything goes wrong, or in case you later decide progressive, EXIF-stripped JPEGs are not for you.
So, after making a backup, you could do one of these options assuming your website is in /var/www:
find /var/www -iname "*.JPG" -exec convert "{}" "PJPEG:{}" \;
or the same again with EXIF stripping, and also colour-profile stripping:
find /var/www -iname "*.jpg" -exec convert "{}" -strip "PJPEG:{}" \;
Or you could use GNU Parallel, like this to use all your CPU cores:
find /var/www -iname "*.jpg" | parallel convert "{}" -strip "PJPEG:{}"

Downscaling/resizing a video during upload to a remote website

I have a web application written in Ruby on Rails that uploads videos from the user to the server using a form (I actually use a jQuery uploader that uploads directly to S3, but I don't think this is relevant).
In order to decrease the upload time for a video, I want to downscale it; e.g. if the video size is 1000×2000 pixels, I want to downscale it to 500×1000. Is there a way to do so on the client side while the video uploads? Is there a JavaScript library that can do that?
Recompressing a video is a non-trivial problem that isn't going to happen in a browser any time soon.
With the changes in HTML5, it is theoretically possible if you can overcome several problems:
You'd use the File API to read the contents of a file that the user selects using an <input type="file"> element. However, it looks like the FileReader reads the entire file into memory before handing it over to your code, which is exactly what you don't want when dealing with large video files. Unfortunately, this is a problem you can do nothing about. It might still work, but performance will probably be unacceptable for anything over 10-20 MB or so.
Once you have the file's data, you have to actually interpret it: something usually accomplished with a demuxer to split the container (MPEG, etc.) file into video and audio streams, and a codec to decompress those streams into raw image/audio data. Your OS comes with several implementations of codecs, none of which are accessible from JavaScript. There are some JS video and audio codec implementations, but they are experimental and painfully slow, and they only implement the decompressor, so you'd be stuck when it comes to creating output.
Decompressing, scaling, and recompressing audio and video is extremely processor-intensive, which is exactly the kind of workload that JavaScript (and scripting languages in general) is worst at. At the very minimum, you'd have to use Web Workers to run your code on a separate thread.
All of this work has been done several times over; you're reinventing the wheel.
Realistically, this is something that has to be done server-side, and even then it's not a trivial endeavor.
If you're desperate, you could try something like a plugin/ActiveX control that handles the compression, but then you have to convince users to install a plugin (yuck).
You could use a gem like CarrierWave (https://github.com/jnicklas/carrierwave). It has the ability to process files before storing them. Even if you upload them directly to S3 first with JavaScript, you could then have CarrierWave retrieve the file, process it, and store it again.
Otherwise you could just have CarrierWave deal with the file from the beginning (unless you are hosting on Heroku and need to avoid the timeouts by going direct to S3).

Saving Div Content As Image On Server

I have been learning a bit of jQuery and .NET in VB. I have created a product customization tool of sorts that basically layers up divs and adds text, images, etc. on top of a t-shirt.
I'm stuck on an important stage!
I need to be able to convert the content of the div that wraps all these divs of text and images into one flat image, taking into account any CSS that has been applied to it as well.
I have heard of tools I could use to screen-capture the content of a browser on the server, which could be possible for low-res thumbs etc., but it sounds a little troublesome! And it would really be nice to create a high-res image.
I have also heard of converting the HTML to an HTML5 canvas and then writing that out... but it looks too complicated for me to fathom, and browser support is an issue.
Is this possible in .NET?
Perhaps something with javascript could be done?
Any help or guidance in the correct direction would be appreciated!
EDIT:
I'm thinking perhaps I could do with two solutions for this. Ideally I would end up with a normal-res JPG/PNG etc. for displaying on the website, but a print-ready high-res file would be very desirable as well.
PostScript printer: I have heard of it, but I'm struggling to find a good resource to understand it as a beginner (especially with the Wikipedia blackout). Perhaps I could create an HTML page from my div content and send it to print to an EPS file. Does anyone know any good tutorials for this?
We did this... about 10 years ago. Interestingly, the tech available really hasn't changed too much.
update - Best Answer
Spreadshirt licenses their product: http://blog.spreadshirt.net/uk/2007/11/27/everyones-a-designer-free-designers-for-premium-partners/
Just license it. Don't do this yourself unless you have real graphics-manipulation and print-production experience. I'd say in today's world you're looking at somewhere around 4,000 to 5,000 hours of dev time to duplicate what they did... and that's if you have two top-tier people working on it.
Short answer: you can't do it in html.
Slightly longer answer:
It doesn't work in part because you can't screen cap the client side and get the level of resolution needed for production type printing. Modern screen resolution is usually on the order of 100 ppi. For a decent print you really need something between 3 and 6 times that density. Otherwise you'll have lots of pixelation and it will generally look like crap when it comes out.
A different Answer:
Your best bet is to leverage something like SVG (scalable vector graphics) and provide a type of drawing surface to the browser. There are several ways of doing this using Flash (Spreadshirt.com uses this) or Silverlight (not recommended). We used flash and it was pretty good.
You might be able to get away with using HTML 5. Regardless, whatever path you pick is going to be complicated.
Once the user is happy with their drawing and wants to print it out, you create the final file and run a process to convert it to Postscript or whatever format your t-shirt provider needs. The converter (aka RIP software) is going to either take a long time to develop or cost a bunch of money... pick one. (helpful hint: buy it. Back then, we spent around $20k US and it was far cheaper than trying to develop).
Of course, this ignores issues such as color matching and calibration. This was actually our primary problem. Everyone's monitor is slightly different and what looks like red on one machine is pink on another.
And for a little background, we were doing customized wrapping paper. The user added text, selected images from our library or uploaded their own, and picked a pattern. Our prints came out on large-format HP Inkjet printers (36" and 60" wide). Ultimately we spent between $200k and $300k just on dev resources to make it happen... and it did, unfortunately, the price point we had to sell at was too high for the market.
If you can use a server-side tool, check out PhantomJS. It is a headless WebKit browser (with no GUI) which can take a page's screenshot, and it has a JavaScript API. It should do the trick.
Send the whole div with user-generated content back to the server using an AJAX call.
Generate an HTML document on the server using the 'HtmlTextWriter' class.
Then you can convert that HTML file using external tools like
(1) http://www.officeconvert.com/products_website_to_image.htm#easyhtmlsnapshot
(2) http://html-to-image.acasystems.com/faq-html-to-picture.htm
which are not free tools, but you can use them by creating a new Process on the server.
The best option I came across is wkhtmltopdf. It comes with a tool called wkhtmltoimage. It uses QtWebKit (a Qt port of the WebKit rendering engine) to render a web page and converts the result to PDF or an image format of your choice, all done server-side.
Because it uses WebKit, it renders everything (images, CSS and even JavaScript) just like a modern browser does. In my use case, the results have been very satisfying and almost identical to what browsers would render.
To start, you may want to look at how to run external tools in .NET:
Execute an external EXE with C#.NET
