NodeJS extract array buffer for video as images - javascript

What I want to do is grab a video and extract its frames as images. This is all being done in Node, so I don't have a video tag.
I am using openvg-canvas for this. Any ideas on how to achieve it?
The problem is that openvg-canvas can draw directly to the screen from the console, so I can print images, draw, and use the whole canvas API. The only thing I don't have access to is the video tag, but canvas can't play videos anyway; it can only use the imageData from each frame.
Any idea how I can get the imageData of each frame of a video? A lot of the packages I see only let me download and save the image, but saving each frame to disk and reading it back would be a huge performance loss.
Any help would be appreciated. I am trying to get a base64 image for each frame of the video.

There is the luuvish/node-ffmpeg package. It is a C++ binding for ffmpeg (rather than a CLI wrapper) and provides a low-level API, so I assume you'll be able to read frame data directly from memory with it and then convert it to base64.
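If the binding proves awkward, a rough sketch of the same idea using the ffmpeg CLI and a pipe (so no intermediate files ever touch the disk) could look like the following; the input path, frame rate, and the naive handling of the concatenated PNG stream are all assumptions:

// Sketch: stream PNG-encoded frames from ffmpeg's stdout straight into memory.
// Assumes ffmpeg is on PATH; 'input.mp4' and the 1 fps rate are placeholders.
const { spawn } = require('child_process');

const ffmpeg = spawn('ffmpeg', [
  '-i', 'input.mp4',
  '-vf', 'fps=1',       // one frame per second
  '-f', 'image2pipe',   // emit images on stdout instead of writing files
  '-vcodec', 'png',
  'pipe:1',
]);

const chunks = [];
ffmpeg.stdout.on('data', (chunk) => chunks.push(chunk));
ffmpeg.on('close', () => {
  // All frames arrive concatenated; they still need to be split on PNG
  // signatures before each one is converted with buffer.toString('base64').
  const data = Buffer.concat(chunks);
  console.log('received', data.length, 'bytes of PNG frame data');
});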

Related

HTML Canvas frames to mp4

Saving the content of an HTML canvas can be done by calling the element's toDataURL() method, which returns the image as a base64-encoded PNG. Would it be possible to somehow merge multiple of these base64 PNG images into a base64 MP4 video? It doesn't really have to be MP4, as long as it's a video.
Good question, you could potentially use pixi.js or BabylonJS or something like that to create cool title sequences or data animations.
There's no built-in support for MPEG or MP4 output in HTML5 and JavaScript, as far as I know. However, there is the library CCapture.js on GitHub, though I haven't tried it yet. The description says it doesn't render the output in real time, which means you can do even more complex/CPU-heavy animations without skipped frames.
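For reference, its usage is roughly the following, going by the CCapture.js README; the format, frame rate, and the drawFrame/canvas names are assumptions standing in for your own render loop:

// Sketch: capture a canvas animation frame by frame with CCapture.js.
// 'webm' output sidesteps the MP4 limitation; 30 fps is an arbitrary choice.
const capturer = new CCapture({ format: 'webm', framerate: 30 });
capturer.start();

function render() {
  requestAnimationFrame(render);
  drawFrame();                // your existing canvas drawing code
  capturer.capture(canvas);   // grab the canvas after each draw
}
render();

// When the animation is finished:
// capturer.stop();
// capturer.save();           // triggers a download of the captured video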

Allow video download from html5 canvas

I am building a small app that lets users add CSS3 filters like grayscale to a video and download the result. The video won't be longer than 6 seconds. So I first load the video into the canvas and then apply the filter the user requests. Now I want the user to be able to download the filtered video. canvas.toDataURL() is only meant for images. Is there any high-level canvas API to achieve this?
Thank you
Not that I know of. I think this is something that should be done server-side. Either send the raw video to the server and tell it which filters were applied so you can recreate the effect there, or use the solution proposed in capturing html5 canvas output as video or swf or png sequence? (hint: it's also server-side).
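A minimal sketch of the first option, telling a server which filter to recreate, might look like this; the /render endpoint and its field names are hypothetical:

// Sketch: ask the server to re-apply the chosen filter and return the video.
// The '/render' endpoint and its JSON fields are made up for illustration.
function requestFilteredDownload(videoUrl, filterName) {
  return fetch('/render', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ video: videoUrl, filter: filterName }),
  }).then((res) => res.blob());  // the server responds with the rendered video
}

// Usage: requestFilteredDownload('/uploads/clip.mp4', 'grayscale')
//   .then((blob) => { /* create an object URL and trigger the download */ });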

Pixi.js generate SpriteSheet Animation or MovieClip from an animated gif file

I am making a game that allows the player to link to their own gif images and immediately make them playable in the game, and need to convert animated .gif files into spritesheets.
I have a jsfiddle that will load any image you paste into the input, but it only loads the first frame:
http://jsfiddle.net/40k7g0cL/
var animatedGif = PIXI.Sprite.fromImage('http://i.imgur.com/egzJbiI.gif');
But pixi.js asset loader can only seem to load the first frame of an animated .gif file and not the rest.
All the information I can find on this subject says I should convert the animated .gif file into a SpriteSheet ahead of time, however this is not possible because the player is going to be supplying the .gif images as they play, so I can not pre-process them ahead of time.
Is there an easy way to load an animated .gif image, having it automatically converted to a SpriteSheet or MovieClip or even an array of Texture objects?
If there is not a simple solution already in pixi.js, do I need to write my own plugin, perhaps using something like jsgif to process the .gif and separate each frame manually?
Any suggestions on how to go about generating a SpriteSheet from an animated .gif client-side in the browser (in javascript) could be useful.
Sorry, there is no way to achieve this directly with pixi.js.
As you suggest, it seems that jsgif is the only low-level implementation of gif decoding for client-side JavaScript. There is also a fork of it called libgif-js, which is a little easier to analyze and can offer clues for building the SpriteSheet.
The process to separate the frames would be:
Load the image data.
If your app is online, you have to use the File API (see here) to read local files.
You'll get an ArrayBuffer or String with the gif's raw data, which can be passed to new Stream(data).
Parse the data:
Call parseGIF(stream, handler). The second library (libgif-js) helps a lot in understanding this process.
Customize handler and callbacks to get what you need (width, height, frames...).
Create your SpriteSheet according to your rules:
If you chose to save the frames as ImageData, use a hidden canvas (it can be the same one used during parsing) to draw them in the right positions to form your SpriteSheet.
Take the final image and use it:
You can use, for example, canvas.toDataURL(format) (first resizing the canvas to the SpriteSheet dimensions) to get the image as a base64 URL. A sketch of these steps follows below.
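Here is a rough sketch of the whole flow using libgif-js's SuperGif wrapper instead of raw parseGIF; the method names (load, get_length, move_to, get_canvas) are taken from that fork's README, and the single-row layout is an arbitrary choice:

// Sketch: build a spritesheet canvas from an animated gif with libgif-js.
// imgElement is an <img> whose src points to the user-supplied gif.
function gifToSpriteSheet(imgElement, onReady) {
  var gif = new SuperGif({ gif: imgElement, auto_play: false });
  gif.load(function () {
    var frameCanvas = gif.get_canvas();       // canvas libgif-js draws into
    var frames = gif.get_length();
    var sheet = document.createElement('canvas');
    sheet.width = frameCanvas.width * frames; // all frames in one row
    sheet.height = frameCanvas.height;
    var ctx = sheet.getContext('2d');

    for (var i = 0; i < frames; i++) {
      gif.move_to(i);                         // render frame i
      ctx.drawImage(frameCanvas, i * frameCanvas.width, 0);
    }
    onReady(sheet.toDataURL('image/png'));    // base64 spritesheet
  });
}

From there the data URL can be handed to PIXI's texture loader and sliced into per-frame textures for a MovieClip.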

Can I do some kind of real-time media decoding with JavaScript?

I have implemented an MJPEG/AVI1 parser which extracts JPEG-formatted frames from a MJPEG file.
I can draw an extracted JPEG frame onto a <canvas> element in the DOM, and I can also export its pixel data with context.getImageData.
Can I make some kind of video stream and append those extracted frames in real time, so the user can play it without a long delay? I know I can manually build a <video>-like UI with a <canvas> element, but I found that Media Source Extensions currently allows a native <video> tag to receive an encoded byte stream. I'm curious whether I can do that with raw pixel data.
That is an interesting idea.
So first, you need to create an MP4 initialization segment. From there you can convert each decoded JPEG's YUV frame into an H.264 frame, then create an MSE fragment out of those frames. But you don't need to 'encode' to H.264; you can use raw slices, like what is outlined in this article.
http://www.cardinalpeak.com/blog/worlds-smallest-h-264-encoder/
This should all be doable in JavaScript, in the browser, with enough work.
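The Media Source Extensions half of that is the well-documented part; the muxing described above is reduced here to hypothetical buildInitSegment/buildMediaSegment placeholders, and the codec string is an assumption:

// Sketch: feed hand-built fMP4 fragments to a <video> element via MSE.
// buildInitSegment() and buildMediaSegment() stand in for the work of
// packaging raw H.264 slices; they are not real library functions.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
  const sb = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  sb.appendBuffer(buildInitSegment());     // ftyp + moov boxes

  sb.addEventListener('updateend', () => {
    const segment = buildMediaSegment();   // next moof + mdat fragment
    if (segment) sb.appendBuffer(segment);
  });
});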

dynamically generating multiple thumbnails from a video src with javascript

Before you say it can't be done please take a look at my train of thought and entertain me.
I have read on Stack Overflow that it can't be done, and how to implement this using ffmpeg and other tools on the server side, which is great and simple enough to comprehend. I've even used an extension to Video.js I found on GitHub that makes this one step easier. But nonetheless, what if I don't have a copy of the <video src=...> file and I really don't care to get one?
I do not want to use a server to do this. Okay, with that out of the way: I understand, thanks to a post from Paul Irish, that video playback is not a shared aspect of WebKit ports (the code which powers basically every browser, minus Chrome Canary, which now uses Blink, a WebKit fork). This kind of explains why certain browsers only support certain video containers.
So for the sake of simplicity: I want to make this functionality available only in Chrome and only for MPEG-4 AVC video containers. Why can't this be done if, somehow, I can actually view each frame of the video while it's played back?
additional note
So generating video thumbnails is possible by drawing frames to a canvas. This will only be part of a final solution to my problem; I'm looking to do this each and every time a video is viewed, not store images on my server after a user's first playback is completed. What I would like to eventually work up to is generating thumbnails as the video is downloaded, which can be viewed while a user drags a scrollbar to fast-forward/rewind to a point in the video. So this will need to be done as frames of video become available, not once they have been rendered by the browser for the user to view.
One can actually feed in a video to the canvas, as seen here in HTML5Doctor. Basically, the line that does the magic is:
canvasContext.drawImage(videoElement, 0, 0, width, height);
Then you can run a timer that periodically retrieves the frames from the canvas. There are two options here:
get raw pixel data
get the base64 encoded data
As for saving, send the data to the server, reconstruct an image from it there, and save that to disk. I also suggest you size your canvas and video to the size you want your screenshots to be, since the video-to-canvas transfer automatically manages scaling.
Of course, this is limited by the video formats supported by the browser, as well as by its support for canvas and video.
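A minimal sketch of that polling approach is below; the one-second interval and 160x90 thumbnail size are arbitrary assumptions:

// Sketch: grab a thumbnail from a playing <video> once per second.
const video = document.querySelector('video');
const canvas = document.createElement('canvas');
canvas.width = 160;
canvas.height = 90;
const ctx = canvas.getContext('2d');
const thumbnails = [];

const timer = setInterval(() => {
  if (video.ended) { clearInterval(timer); return; }
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  thumbnails.push(canvas.toDataURL('image/jpeg'));  // option 2: base64 data
  // option 1: ctx.getImageData(0, 0, canvas.width, canvas.height) for raw pixels
}, 1000);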
Generating thumbnails during first render? You'd run into problems with that since:
You can't generate all frames unless it's rendered on the video element.
Suppose you have generated thumbnails during the first run and want to reuse them for later runs. Base64 data is long (roughly a third larger than the binary image it encodes), and a raw pixel data array is width x height x 4 bytes. The most viable storage candidate is localStorage, which is only 5-10 MB depending on the browser.
There is no way to put the generated images into the browser cache (there could be a caching hack using data URLs that I don't know about).
I suggest you do it on the server instead. It's too much burden and hassle to do on the client side.
