Ways to render an image sequence from Canvas - javascript

Context
I'm creating some animated graphics using Canvas, and I would like to save them as an image sequence. If I do it through a web browser, for obvious security reasons, it will ask me to confirm the save of each file manually. I need to work around this.
I also need lossless frames with an alpha channel, which is why I'm using a PNG image sequence and nothing else. Image sequences can get huge (e.g. a full-HD sequence of 2 minutes at 30 frames/s will easily exceed 1 GB).
Question
What are possible workarounds? I think Node.js could be useful, because running server-side should let me save the image sequence without waiting for confirmations. Unfortunately, I don't know it very well, which is one of the reasons I'm asking.
I'm also looking for a "comfortable" solution. In one of the links below, someone manages to do it with Python and MIME, but it looks really cumbersome and slow.
Googling
Exporting HTML canvas as an image sequence: it doesn't cover a Node.js solution, though.
Saving a sequence of images in a web canvas to disk: not the same context, though; the author is providing a web service for clients.
https://forum.processing.org/two/discussion/19218/how-to-render-p5-js-sketch-as-a-movie confirms what I've explained, but doesn't bring any solution.
https://github.com/spite/ccapture.js/#limitations: it only exports video, not PNG images, so it isn't what I'm looking for.
http://jeremybouny.fr/en/articles/server_side_canvas_node/
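For what it's worth, here is a minimal sketch of what I imagine the Node.js workaround could look like, assuming the node-canvas package (canvas on npm); the resolution, drawing code and file names are just placeholders:
// Sketch only: assumes the "canvas" npm package (node-canvas) is installed.
const fs = require('fs');
const { createCanvas } = require('canvas');

const width = 1920, height = 1080;
const canvas = createCanvas(width, height);
const ctx = canvas.getContext('2d');

const fps = 30, seconds = 2; // a short test run, not the full 2 minutes
for (let frame = 0; frame < fps * seconds; frame++) {
  ctx.clearRect(0, 0, width, height); // keeps the alpha channel transparent
  // ... draw the animation for this frame here ...
  const name = 'frame-' + String(frame).padStart(5, '0') + '.png';
  fs.writeFileSync(name, canvas.toBuffer('image/png')); // lossless PNG, no save dialog
}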
Disclaimer
I'm not a native English speaker; I did my best. Please feel free to edit if something is badly written.

Related

How to automatically compress photos/videos for website when uploaded?

I am working on a website where users can upload pictures and videos. How would I automatically compress those videos/pictures server-side before storing them in my server/database? I don't want abnormally large files to slow down my website. If I were uploading myself I could obviously resize and optimize the files myself, but is there a way to do this automatically for my users?
Well, that is a broad question, and the answer depends on the type of files and the algorithm you select.
For images, you can simply use JPG and select the desired quality percentage (the lower the quality, the smaller the file, but the worse the resulting picture looks). Example: http://blog.clonesinfo.com/how-to-reduce-compress-image-file-size-uploading-using-php-code/
If you want more options, or for example lossless quality, you should definitely look for a library or tool; see this question for more info: Which is the best PHP method to reduce the image size without losing quality
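If the stack happens to be Node rather than PHP (the main question in this thread uses Node), a library like sharp exposes the same kind of quality knob; a minimal sketch with placeholder file names:
// Sketch only: assumes the "sharp" npm package.
const sharp = require('sharp');

sharp('uploaded.jpg')
  .resize({ width: 1920, withoutEnlargement: true }) // cap the dimensions
  .jpeg({ quality: 70 })                             // lower quality = smaller file
  .toFile('optimized.jpg')
  .then(info => console.log('wrote', info.size, 'bytes'));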
For videos, it gets a little more complicated, as making a video smaller requires re-encoding it and picking the right settings (the codec you pick will usually be the most compatible and efficient one, H.264, or something like VP9 from Google). Note that re-encoding requires a significant amount of processing power on your server (which might become an issue if videos are large and long). Video encoding is a very broad topic which I cannot cover in one answer; you can start by googling how H.264 works.
For video encoding you're also going to need a tool; probably the best choice is ffmpeg/avconv, plus some PHP library to make it easier to use.
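As a rough illustration of the ffmpeg step (the command is the same whether you drive it from PHP or, as in this sketch, from Node; file names and the CRF value are just examples):
// Sketch only: requires ffmpeg to be installed on the server.
const { execFile } = require('child_process');

execFile('ffmpeg', [
  '-i', 'upload.mov',   // whatever the user uploaded
  '-c:v', 'libx264',    // H.264, as suggested above
  '-crf', '28',         // higher CRF = smaller file, lower quality
  '-preset', 'medium',
  '-c:a', 'aac',
  'upload-compressed.mp4'
], (err) => {
  if (err) console.error('encode failed', err);
  else console.log('encode done');
});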

Base64 video encoding - good/bad idea? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I'm working on a mobile front-end project using Cordova, and the backend developer I'm working with insists that the media files (images/videos) should be transferred as base64 encoded in JSON files.
Now, with images it's working so far. Although it freezes the UI for a few seconds, that can be deferred somehow.
The videos, however, are a pain to handle so far: the encoded string for a single, simple video is nearly 300,000 characters long. It puts my poor laptop through a wild spin and only produces the URI after about 20 seconds of churning through the code (and it's still not working, and I don't feel like debugging it because it nearly crashes my laptop on every refresh).
So my questions are:
Is base64 encoding a popular way of transferring media in mobile development?
And if not, what alternative way would you recommend using to transfer/present these videos?
I should mention though, the videos are meant to be viewed at once by a large number of people (hundreds perhaps), and the other developer says that their servers can't handle such traffic.
Many thanks for any advice; I couldn't find this info anywhere. :)
[...] the backend developer [...] insists that the media files (images/videos) should be transferred as base64 encoded in json files.
This is a very bad (and silly) idea up front. You do not want to transfer large amounts of binary data as strings, and especially not as Unicode strings.
Here you need to arm up and convince your backend dev rebel to change his mind with whatever it takes: play some Bieber or Nickelback, change his background image to something Hello Kitty, or take a snapshot of his screen, set it as his background and hide all the icons and the taskbar. That should help change his mind. If not, place a Webasto in his office at max and lock all doors and windows.
Is base64 encoding a popular way of transferring media in mobile development?
It is popular and has a relatively long history; it became very common on Usenet and so forth. In those days, however, the amount of data was very low compared to today, as everything was transferred over modems.
However, just because it is popular doesn't mean it is the right tool for everything. It is not very efficient, as it requires an encoding process which converts every three octets into four bytes, adding roughly 33% to the size.
On top of that, in JavaScript each string character is stored as two bytes due to the Unicode character set, so your data is doubled and then extended by 33%. Your 300 MB of data is now 300 x 2 x 1.33 = 798 MB (show that to your backend dev! :) as it's a real factor if the servers cannot handle large amounts of traffic).
This works fine for smaller files, but for larger files as in your example this can cause significant overhead in both time and memory usage, and of course bandwidth. And of course, on the server side you would need to reverse the process, with its own overhead.
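The 4-bytes-for-3-octets expansion is easy to verify in Node, for example:
// 3,000,000 raw bytes become exactly 4,000,000 base64 characters (+33%).
const raw = Buffer.alloc(3 * 1000 * 1000);
const b64 = raw.toString('base64');
console.log(raw.length, b64.length); // 3000000 4000000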
And if not, what alternative way would you recommend using to transfer/present these videos?
I would recommend:
Separate meta-data out as JSON with a reference to the data. No binary data in the JSON.
Transfer the media data itself separately in native bytes (ArrayBuffer).
Send both at the same time to server.
The server then only needs to parse the JSON into something edible for the backend; the binary data can go straight to disk (see the sketch below).
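On the client side that could look roughly like this (a sketch only; the '/upload' endpoint and field names are made up):
// Metadata travels as JSON text, the media as raw bytes, in one multipart request.
const file = fileInput.files[0]; // from an <input type="file">
const form = new FormData();
form.append('meta', JSON.stringify({ name: file.name, size: file.size }));
form.append('media', file);      // sent as binary, not base64

fetch('/upload', { method: 'POST', body: form })
  .then(res => console.log('upload status', res.status));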
Update: I forgot to mention, as Pablo does in his answer, that you can look into streaming the data.
However, streaming is pretty much a synonym for buffering, so the bandwidth will be about the same, just provided in a more brute-force way (usually UDP versus TCP, i.e. loss of packets doesn't break the transfer). Streaming will limit your options more than buffering in the client, though.
My 2 cents...
Not sure why "33% overhead" is always mentioned, when that's complete nonsense. Yes, it does initially add roughly that amount, but there's a little thing called gzip (ever heard of it?). I've done tons of tests and the difference is typically negligible. In fact, sometimes the gzipped base64 string is actually smaller than the binary file. Check out this guy's tests. So please, can we stop spreading absolute fiction?
Base64 is a perfectly acceptable method of retrieving a video. In fact, it works amazingly well for a private messaging system. For instance, if you were using AWS S3, you could store the files privately so there is no URL.
However, the main disadvantage (imho) of using a gzipped base64 video is that you need to wait for the whole video to load, so pseudo-streaming is out of the question.
Base64 is a convenient (but not efficient) way of transferring binary data. It's inefficient because the transfer size will be 33% bigger than what you're originally transferring, so it's not a popular way of transmitting video. If you are planning to stream that video, you should look for an established protocol for doing just that.
I would recommend a streaming protocol (there are a lot to choose from).
I think it's a bad idea; video files are large. But you can try it with small video files.
Try an online encoder such as https://base64.online/encoders/encode-video-to-base64
There you can convert a video to a Base64 data URI and try inserting it in your HTML.
The result looks like this:
<video controls><source src="data:video/mpeg;base64,AAABuiEAAQALgBexAAABuwAMgBexBeH/wMAg4ODgAAA..."></video>

Detect Content Embedded in Images

I was wondering if there is any way to detect content embedded in uploaded images. For instance, using WinRAR, I can embed any sort of file into an image while keeping the file's format as an image. Sites like imgur manage to block this. I am wondering how they do it.
I think one possible way would be to upload the image data to a canvas, so that it's represented purely as an array of pixels, and then reconvert the canvas's data back into an image. However, this would be rather time consuming on the server side.
Does anybody know of an efficient way to do this?
Since you mentioned node.js and server side, you can do the following:
1) Use ImageMagick with the node binding node-imagemagick - it calls the ImageMagick CLI, so it will be fast. The library is widely used, so you will find plenty of examples of how to remove Exif and other unnecessary data from a file. In the worst case you can recompress the file.
2) If you are working with JPEG images only, you can use node-jpegoptim and optimise each uploaded file. It also uses the CLI, so it will be fast.
3) Finally, you can use node-smushit and let Yahoo's servers do the job; however, you need to check whether their terms of service are OK with your content.
Those are the three that came to mind; I hope one of them will satisfy your needs. A rough sketch of option 1 follows.
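A rough sketch of option 1, using Node's child_process to call the ImageMagick convert CLI directly (node-imagemagick wraps the same tool; file names are placeholders). Re-encoding writes a fresh file, which drops metadata as well as anything appended after the image data:
// Sketch only: requires ImageMagick's `convert` on the server.
const { execFile } = require('child_process');

execFile('convert', [
  'upload.png',
  '-strip',     // remove Exif, profiles and comments
  'clean.png'   // freshly written image data, so appended payloads are gone
], (err) => {
  if (err) console.error('re-encode failed', err);
});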

WebGL is only rendering/loading certain PNG files

I have created an OOP environment for WebGL so I can easily create all the objects I need for future game projects I might get. Most of the work is already done, but I'm getting painfully frustrated now with displaying .png files. I tested multiple object instances with a certain PNG file and it was working smoothly (even the transparency was), but now that I try other PNG files it doesn't render them properly: just the untextured plane (a black square).
I have tried loading them in multiple orders, and some other PNG files it does load in, but I can't find any apparent difference between the PNG files; all have the same access rights for the browser. Also, I can't find any similar problems described online.
Anyone with WebGL/OpenGL experience who knows what might be happening here?
EDIT:
I still haven't figured out why it can only read certain PNG files, but I do know the settings needed to make a file readable:
RGB Color, 8 Bit
Color profile: sRGB IEC61966-2.1
Are your textures powers of 2?
WebGL is designed for embedded systems, so its non-power-of-2 support is limited.
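If that is the cause, the usual WebGL 1 workaround for non-power-of-2 images is to skip mipmaps and clamp the wrap mode; a rough sketch, assuming gl, texture and image are already set up:
// Parameters WebGL 1 requires for non-power-of-2 textures.
function isPowerOf2(n) { return (n & (n - 1)) === 0; }

gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

if (isPowerOf2(image.width) && isPowerOf2(image.height)) {
  gl.generateMipmap(gl.TEXTURE_2D);
} else {
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
}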

Saving Div Content As Image On Server

I have been learning a bit of jQuery and .NET in VB. I have created a product customization tool of sorts that basically layers up divs and adds text, images, etc. on top of a t-shirt.
I'm stuck on an important stage!
I need to be able to convert the content of the div that wraps all these text and image divs into one flat image, taking into account any CSS that has been applied to it.
I have heard of tools that could screen-capture the content of a browser on the server, which could work for low-res thumbs etc., but it sounds a little troublesome, and it would really be nice to create a high-res image.
I have also heard of converting the HTML to an HTML5 canvas and then writing that out... but it looks too complicated for me to fathom, and browser support is an issue.
Is this possible in .NET?
Perhaps something with javascript could be done?
Any help or guidance in the correct direction would be appreciated!
EDIT:
I'm thinking perhaps I need two solutions for this. Ideally I would end up with a normal-res JPG/PNG etc. for displaying on the website, but a print-ready, high-res file would be very desirable as well.
PostScript printer: I have heard of it, but I'm struggling to find a good beginner's resource to understand it (especially with the wiki blackout). Perhaps I could create an HTML page from my div content and print it to an EPS file. Does anyone know any good tutorials for this?
We did this... about 10 years ago. Interestingly, the tech available really hasn't changed too much.
update - Best Answer
Spreadshirt licenses their product: http://blog.spreadshirt.net/uk/2007/11/27/everyones-a-designer-free-designers-for-premium-partners/
Just license it. Don't do this yourself unless you have real graphics-manipulation and print-production experience. I'd say in today's world you're looking at somewhere around 4,000 to 5,000 hours of dev time to duplicate what they did... and that's if you have two top-tier people working on it.
Short answer: you can't do it in html.
Slightly longer answer:
It doesn't work in part because you can't screen cap the client side and get the level of resolution needed for production type printing. Modern screen resolution is usually on the order of 100 ppi. For a decent print you really need something between 3 and 6 times that density. Otherwise you'll have lots of pixelation and it will generally look like crap when it comes out.
A different Answer:
Your best bet is to leverage something like SVG (scalable vector graphics) and provide a type of drawing surface to the browser. There are several ways of doing this, using Flash (Spreadshirt.com uses this) or Silverlight (not recommended). We used Flash and it was pretty good.
You might be able to get away with using HTML 5. Regardless, whatever path you pick is going to be complicated.
Once the user is happy with their drawing and wants to print it out, you create the final file and run a process to convert it to Postscript or whatever format your t-shirt provider needs. The converter (aka RIP software) is going to either take a long time to develop or cost a bunch of money... pick one. (helpful hint: buy it. Back then, we spent around $20k US and it was far cheaper than trying to develop).
Of course, this ignores issues such as color matching and calibration. This was actually our primary problem. Everyone's monitor is slightly different and what looks like red on one machine is pink on another.
And for a little background, we were doing customized wrapping paper. The user added text, selected images from our library or uploaded their own, and picked a pattern. Our prints came out on large-format HP inkjet printers (36" and 60" wide). Ultimately we spent between $200k and $300k just on dev resources to make it happen... and it did; unfortunately, the price point we had to sell at was too high for the market.
If you can use a server-side tool, check out PhantomJS. It is a headless WebKit browser (with no GUI) which can take a screenshot of a page and is driven by a JavaScript API. It should do the trick.
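A minimal PhantomJS capture script looks roughly like this (the URL and output name are placeholders):
// Sketch only: run with `phantomjs capture.js`.
var page = require('webpage').create();
page.viewportSize = { width: 1200, height: 900 };

page.open('http://example.com/tshirt-design.html', function (status) {
  if (status === 'success') {
    page.render('design.png'); // writes the screenshot to disk
  }
  phantom.exit();
});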
Send the whole div with the user-generated content back to the server using an ajax call.
Generate an HTML document on the server using the 'HtmlTextWriter' class.
Then you can convert that HTML file using external tools like
(1) http://www.officeconvert.com/products_website_to_image.htm#easyhtmlsnapshot
(2) http://html-to-image.acasystems.com/faq-html-to-picture.htm
These are not free tools, but you can use them by creating a new Process on the server.
The best option I came across is wkhtmltopdf. It comes with a tool called wkhtmltoimage. It uses QtWebKit (a Qt port of the WebKit rendering engine) to render a web page and converts the result to PDF or an image format of your choice, all done server-side.
Because it uses WebKit, it renders everything (images, CSS and even JavaScript) just like a modern browser does. In my use case the results have been very satisfying and are almost identical to what a browser would render.
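For illustration, the rendering boils down to one external command; here is a rough sketch of invoking it from Node (in .NET you would launch the same command via a Process, as the link below shows; the URL and paths are placeholders):
// Sketch only: requires wkhtmltoimage (shipped with wkhtmltopdf) on the server.
const { execFile } = require('child_process');

execFile('wkhtmltoimage', [
  '--format', 'png',
  'http://example.com/tshirt-design.html', // page to render
  'design.png'                             // output image
], (err) => {
  if (err) console.error('render failed', err);
});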
To start, you may want to look at how to run external tools in .NET:
Execute an external EXE with C#.NET
