Saving the content of an HTML canvas can be done by calling the element's toDataURL() method, which returns the image as a base64-encoded PNG data URL. Would it be possible to somehow merge multiple of these base64 PNG images into a base64 MP4 video? It doesn't really have to be MP4, as long as it's a video.
Good question; you could potentially use pixi.js or BabylonJS or something like that to create cool title sequences or data animations.
There's no built-in support for MPEG or MP4 output in HTML5 and JavaScript, as far as I know. However, there is the library CCapture.js on GitHub, though I haven't tried it yet. The description says that it doesn't render the output in real time, which means you can do even more complex/CPU-heavy animations without skipped frames.
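Based on its README, the usage would look roughly like this (a sketch only, since I haven't tried it; canvas and drawMyAnimation() stand in for your own rendering code, and note the output here is WebM rather than MP4):

var capturer = new CCapture({ format: 'webm', framerate: 30 });
var framesLeft = 90;                   // capture about 3 seconds at 30 fps

capturer.start();
(function render() {
    drawMyAnimation();                 // placeholder for your own canvas drawing code
    capturer.capture(canvas);          // grab the current canvas contents as one frame
    if (--framesLeft > 0) {
        requestAnimationFrame(render);
    } else {
        capturer.stop();
        capturer.save();               // triggers a download of the captured video
    }
})();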
Related
I am building a small app that lets the user add CSS3 filters, like grayscale, to a video and download the result. The video won't be longer than 6 seconds. So I first load the video into the canvas and then apply the filter the user chooses. Now I want the user to be able to download the filtered video. canvas.toDataURL() is only meant for images. Is there any high-level canvas API to achieve this?
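For reference, this is roughly how I draw the filtered frames at the moment (simplified; the element ids and the grayscale filter are just placeholders):

var video = document.getElementById('sourceVideo');
var canvas = document.getElementById('filteredCanvas');
var ctx = canvas.getContext('2d');

video.addEventListener('play', function drawFrame() {
    ctx.filter = 'grayscale(100%)';    // same syntax as the CSS filter property
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    if (!video.paused && !video.ended) requestAnimationFrame(drawFrame);
});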
Thank you
Not that I know of. I think this is something that should be done server-side. Either send the raw video to the server and tell it what filters were applied so you can re-create the effect there, or use the solution proposed here: capturing html5 canvas output as video or swf or png sequence? (hint: it's also server-side)
I am making a game that allows the player to link to their own gif images and immediately make them playable in the game, and need to convert animated .gif files into spritesheets.
I have a jsfiddle that will load any image you paste into the input, but it only loads the first frame:
http://jsfiddle.net/40k7g0cL/
var animatedGif = PIXI.Sprite.fromImage('http://i.imgur.com/egzJbiI.gif');
But the pixi.js asset loader only seems to load the first frame of an animated .gif file and not the rest.
All the information I can find on this subject says I should convert the animated .gif file into a SpriteSheet ahead of time; however, this is not possible because the players will be supplying the .gif images as they play, so I cannot pre-process them ahead of time.
Is there an easy way to load an animated .gif image and have it automatically converted to a SpriteSheet, a MovieClip, or even an array of Texture objects?
If there is not a simple solution already in pixi.js, do I need to write my own plugin, perhaps using something like jsgif to process the .gif and separate each frame manually?
Any suggestions on how to go about generating a SpriteSheet from an animated .gif client-side in the browser (in JavaScript) would be useful.
Sorry, there is no way to achieve this directly with pixi.js.
As you suggest, it seems that jsgif is the only low-level implementation of GIF parsing for client-side JavaScript. There is also a fork of it called libgif-js, which is a little easier to analyze and can offer a clue for building the SpriteSheet.
The process to separate the frames would be as follows (a rough code sketch follows the steps):
Load the image data.
If your app is online, you have to use the File API (see here) to read local files.
You'll get an ArrayBuffer or String with the gif's raw data, which can be passed to new Stream(data).
Parse the data:
Call parseGIF(stream, handler). The second library can help a lot in understanding this process.
Customize handler and callbacks to get what you need (width, height, frames...).
Create your SpriteSheet according to your rules:
If you chose to save the frames as ImageData, use a hidden canvas (it can be the same one used in the parse) to draw them in the right positions to form your SpriteSheet.
Take the final image and use it:
You can use, for example, canvas.toDataURL(format) (first, resize the canvas to the SpriteSheet dimensions) to get the image as a base64 URL.
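If you'd rather not drive parseGIF and Stream by hand, libgif-js also exposes a higher-level SuperGif wrapper; a sketch of the same process with it could look like this (the method names come from its README, so verify them against the version you actually use):

var img = new Image();
img.src = 'http://i.imgur.com/egzJbiI.gif';        // player-supplied gif
document.body.appendChild(img);                    // SuperGif expects the <img> to be in the DOM

var gif = new SuperGif({ gif: img });
gif.load(function () {
    var frames = gif.get_length();
    var frameCanvas = gif.get_canvas();            // libgif-js renders each frame into this canvas
    var w = frameCanvas.width, h = frameCanvas.height;

    // Lay every frame out side by side on one hidden canvas: that's the SpriteSheet.
    var sheet = document.createElement('canvas');
    sheet.width = w * frames;
    sheet.height = h;
    var ctx = sheet.getContext('2d');

    for (var i = 0; i < frames; i++) {
        gif.move_to(i);                            // render frame i into frameCanvas
        ctx.drawImage(frameCanvas, i * w, 0);
    }

    var spriteSheetUrl = sheet.toDataURL();        // base64 PNG of the SpriteSheet
    // From here you can slice pixi.js textures out of the sheet, e.g.
    // new PIXI.Texture(PIXI.Texture.fromCanvas(sheet).baseTexture, new PIXI.Rectangle(i * w, 0, w, h)).
});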
What I want to do is grab a video and extract its frames as images. This is all being done in Node, so I don't have a video tag.
I am using openvg-canvas for this. Any ideas on how to achieve this?
The problem is that openvg-canvas can print directly to the screen over the console, so I can print images, draw, and use the whole canvas API. The only thing I don't have access to is the video tag, because canvas can't play videos; it can only use the imageData of each frame.
Any idea how I can get the imageData of each frame of a video? A lot of the packages I see only allow me to download and save the image, but saving the image to disk and reading it again would be a huge performance loss.
Any help would be appreciated. I am trying to get the base64 image data of the video frames.
There is the luuvish/node-ffmpeg package. It is a C++ binding for ffmpeg (instead of a CLI wrapper) and provides a low-level API, so I assume you'll be able to read the frame data directly from memory with its API and then convert it into base64.
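If the binding turns out to be awkward to use, another way to stay off the disk is to shell out to the ffmpeg CLI and read the frame straight from stdout; here is a rough sketch (the helper name, the seek handling and the maxBuffer value are only placeholders):

var execFile = require('child_process').execFile;

// Extract a single frame at `seconds` as a base64 JPEG data URL, piping it
// through stdout so nothing is written to disk.
function frameToBase64(videoPath, seconds, callback) {
    execFile('ffmpeg', [
        '-ss', String(seconds),        // seek to the wanted frame
        '-i', videoPath,
        '-frames:v', '1',              // grab just one frame
        '-f', 'image2pipe',            // write to stdout instead of a file
        '-vcodec', 'mjpeg',
        'pipe:1'
    ], { encoding: 'buffer', maxBuffer: 10 * 1024 * 1024 }, function (err, stdout) {
        if (err) return callback(err);
        callback(null, 'data:image/jpeg;base64,' + stdout.toString('base64'));
    });
}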
I'm writing an application in HTML5 + JS, and at some point I need to upload the contents of a <canvas> element, which has some PNG file drawn on it. So I'm doing:
$('#img').val(canvasElement.toDataURL());
As you can see, I provide no arguments to the toDataURL function, so the extracted image will be in PNG format. #img is a text input (with the name attribute "img") that is later used to upload the base64 representation of the image, extracted from the canvas, to the server. I do it with AJAX, requesting a PHP file which contains the code below:
$data = explode(',', $_POST['img']);   // split the "data:image/png;base64," prefix from the payload
$imgdata = base64_decode($data[1]);    // decode the raw PNG bytes
$ifp = fopen('superGraphicFile.png', "wb");
fwrite($ifp, $imgdata);                // write the decoded image to disk
fclose($ifp);
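For completeness, the client-side upload is basically just this (simplified; the PHP file name is a placeholder):

$('#img').val(canvasElement.toDataURL());
// POST the base64 string to the script above; 'upload.php' is a placeholder name.
$.post('upload.php', { img: $('#img').val() }, function (response) {
    console.log('upload finished', response);
});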
That works, but unfortunately a 300x600 PNG is enough to make the AJAX request take really long (around 20 seconds), because the file size is then around 400 kB. That amount of time is unacceptable in my case.
I was wondering how to reduce the amount of data sent to the server, and I thought that sending it as JPEG would be cool, as we can specify the quality of the JPEG image extracted from the canvas. For example, calling toDataURL('image/jpeg', 0.7) makes the file over 10 times smaller than calling it with no arguments. However, I have to preserve the transparency information from the original PNG, and since JPEG can't store it, I would like to invent some way to recreate the original PNG on the server side.
At first, I thought about filling all the transparent pixels of the original PNG with some specific color, converting it to JPEG, sending it to the server, and replacing all pixels of that color with transparent pixels. This, however, would probably not work, because I also need to preserve the semi-transparent pixels of the original image. Maybe there is some method to extract the alpha channel from the original image, send it to the server as another JPEG, and apply it as a mask on the server side, recreating the original PNG that way? Or maybe I'm missing some other solution?
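To make the alpha-mask idea concrete, the client side could look something like this (just a rough sketch; whether the extra request and the server-side recombination are worth it is exactly what I'm unsure about):

// Split the canvas into a JPEG color layer plus a grayscale JPEG that carries
// the alpha channel; the server would have to recombine the two.
function splitColorAndAlpha(canvas, quality) {
    var w = canvas.width, h = canvas.height;
    var src = canvas.getContext('2d').getImageData(0, 0, w, h);

    var alphaCanvas = document.createElement('canvas');
    alphaCanvas.width = w;
    alphaCanvas.height = h;
    var alphaCtx = alphaCanvas.getContext('2d');
    var mask = alphaCtx.createImageData(w, h);

    for (var i = 0; i < src.data.length; i += 4) {
        var a = src.data[i + 3];                     // copy alpha into the RGB channels
        mask.data[i] = mask.data[i + 1] = mask.data[i + 2] = a;
        mask.data[i + 3] = 255;
    }
    alphaCtx.putImageData(mask, 0, 0);

    return {
        color: canvas.toDataURL('image/jpeg', quality),       // color layer, alpha dropped
        alpha: alphaCanvas.toDataURL('image/jpeg', quality)   // alpha channel as grayscale
    };
}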
Thank you in advance for all your advice.
EDIT: The 20 seconds I wrote about might have been a problem with my internet connection, because I didn't change anything and now it takes around a second to transfer 400 kB of data. But I still think that reducing the data tenfold, saving server resources and making the app work faster, would be a cool thing to do.
I'm afraid you're trying to solve your problem with a dirty workaround (no offense). Converting a PNG to JPEG and back to PNG will always be a lossy transformation (including the loss of the alpha channel) and will consume many more CPU cycles than just sending the original file.
The best approach would probably be to look inside the code and try to optimize it somehow. On today's networks and PCs, transferring/handling 400 kB should not take 20 seconds. Also, in some cases a JPEG may be much bigger than a PNG; for example, a screenshot of a window might be 141 kB as PNG but 245 kB as JPEG.
On the other hand, if it really is better to use a lossy format, then you can look at this. I have found that there are also some other image types that support transparency.
Maybe you could try the TIFF image format, which can include both the image data and the alpha channel (hopefully with JPEG compression). The only problem is that I know of only one browser that supports TIFF out of the box (Safari).
Probably the best option is to convert to GIF. It preserves transparency and the compression itself is lossless, although there is no semi-transparency and the palette is limited to 256 colors.
Another option is to use JPEG 2000, as it offers higher compression with lower loss of quality and supports an alpha channel as well. Again, Safari supports it, but Firefox needs an add-on. I remember that a few years ago I had a JPEG 2000 add-on for Microsoft Internet Explorer. I don't know of any other major browser with JPEG 2000 support.
I have implemented an MJPEG/AVI1 parser which extracts JPEG-formatted frames from an MJPEG file.
I can draw an extracted JPEG frame into the DOM with a <canvas> element, and I can also export its pixel data with context.getImageData.
Can I make some kind of video stream and append the extracted data in real time so that the user can play it without a long delay? I know I can manually build a <video>-like UI with a <canvas> element, but I found that Media Source Extensions lets a native <video> tag receive an encoded byte stream. I'm curious whether I can do that with raw pixel data.
That is an interesting idea.
So first, you need to create an MP4 initialization segment. From there you can convert each decoded JPEG's YUV frame to an H.264 frame, then create an MSE fragment out of the frames. But you don't need to really 'encode' to H.264; you can use raw slices, like what is outlined in this article:
http://www.cardinalpeak.com/blog/worlds-smallest-h-264-encoder/
This should all be doable in JavaScript, in the browser, with enough work.
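For the MSE plumbing itself, it would look roughly like the sketch below; initSegment, produceNextFragment() and the codec string are placeholders standing in for the JPEG-to-raw-slice-to-fMP4 packaging described above:

var video = document.querySelector('video');
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
    // The codec string must match whatever the muxer writes into the init segment.
    var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');

    sourceBuffer.addEventListener('updateend', function () {
        var fragment = produceNextFragment();      // placeholder: the next fMP4 media segment
        if (fragment) sourceBuffer.appendBuffer(fragment);
    });

    sourceBuffer.appendBuffer(initSegment);        // placeholder: the mp4 initialization segment
});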