I'm building a Windows RDP (Remote Desktop Protocol) client that runs in a web browser.
I want to stream ImageData (Uint8ClampedArray) to the browser live (every 100 ms), and
I got it working with server-side rendering (node-canvas).
But my current code performs very poorly, because CPU rendering can't keep up (there is simply too much of it). So I want to try GPU parallel computing with WebGL on the client. (Is that possible?)
Here is my first attempt (I'll skip describing the authentication procedure):
Server-side
step 1: hook the RDP ImageData (compressed with an RLE algorithm)
step 2: decompress the RLE-compressed image data into a Uint8ClampedArray
step 3: draw it to a canvas with putImageData
step 4: get the data URL and strip the 'data:image/png;base64,' prefix
step 5: decode the base64 into a buffer (the same bytes as the image file) and store it in Express
step 6: Express serves it at an image URL (e.g. https://localhost/10-1.png#timestamp)
step 7: send the image URL to the client via socket.io
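The decompression in step 2 can be sketched as a plain function; the [runLength, value] byte-pair encoding below is an assumption for illustration, since the actual RDP RLE variant isn't specified here.

```javascript
// Sketch of step 2: expand RLE-compressed bytes into a Uint8ClampedArray.
// Assumes a simple [runLength, value] pair encoding; the real RDP RLE
// format may differ.
function rleDecode(compressed, expectedLength) {
  const out = new Uint8ClampedArray(expectedLength);
  let pos = 0;
  for (let i = 0; i < compressed.length; i += 2) {
    const run = compressed[i];       // how many times the value repeats
    const value = compressed[i + 1]; // the repeated byte
    out.fill(value, pos, pos + run); // write `run` copies in one call
    pos += run;
  }
  return out;
}
```

The resulting array can be handed to `new ImageData(decoded, width, height)` for step 3.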
Client-side
step 1: when the page loads, create 64x64 image tags inside a div (like Google Maps tiles)
step 2: receive the image URL and derive the tile coordinates by parsing the file name ('10-1.png' -> x:640, y:64)
step 3: set the image tag's src to the received image URL
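The client-side steps can be sketched like this; the `tile-x-y` element id scheme is an assumption for illustration, not part of the original setup.

```javascript
const TILE_SIZE = 64;

// Step 2: derive tile coordinates from the file name,
// e.g. '10-1.png' -> { x: 640, y: 64 }.
function parseTileName(name) {
  const [col, row] = name.split('.')[0].split('-').map(Number);
  return { x: col * TILE_SIZE, y: row * TILE_SIZE };
}

// Step 3 (browser only): point the matching <img> at the new URL.
function onTileUrl(url) {
  const name = url.split('/').pop().split('#')[0]; // strip path and '#timestamp'
  const { x, y } = parseTileName(name);
  const img = document.getElementById(`tile-${x}-${y}`); // assumed id scheme
  if (img) img.src = url;
}
```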
Current performance is bad (though it's actually not so bad when the resolution is small).
Question
Is there any way to get Uint8ClampedArray image data into a texture using three.js?
Is it possible to decompress the RLE-compressed data on the GPU using three.js?
I don't believe there is a way to directly draw ImageData using three.js.
But as I see it you have other options:
First of all, I don't get why you are not just sending JPG or PNG data via WebSockets. Then you could actually use the PNG and draw it as a sprite in three.js.
That said, I don't think the bottleneck is the actual drawing of the data to the canvas, and even if it were, WebGL won't help you with that. I just tried it: WebGL versus plain putImageData(). For an HD image (1920x1080), averaged over 1000 drawings, it took 14 ms with putImageData() and 70 ms with WebGL.

Now, when you need to loop through the pixels because you want to do image processing like edge detection, it is a whole different story; there WebGL will be on top for sure. Here is why: WebGL uses the GPU, which means that when you want to do something with the image you first have to load all the data onto the GPU, which is rather slow. But processing the data there is quite fast, much faster than looping through the pixels of the image in JavaScript.
So in conclusion, I would say your best bet is to send PNG images via WebSockets using ArrayBuffers, and on the client side draw them to a canvas element.
Here is a link to the WebGL material that explains how you can use it for image processing: WebGL Fundamentals: Image Processing
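A minimal sketch of the suggested approach, assuming a WebSocket endpoint that pushes each frame as a binary PNG; the URL and canvas wiring are placeholders, not part of the original setup.

```javascript
// First 8 bytes of every PNG file, useful for sanity-checking incoming frames.
const PNG_MAGIC = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];

function isPng(arrayBuffer) {
  const bytes = new Uint8Array(arrayBuffer, 0, 8);
  return PNG_MAGIC.every((b, i) => bytes[i] === b);
}

// Browser-only part: let the browser's native PNG decoder do the heavy lifting.
function startStream(canvas) {
  const ctx = canvas.getContext('2d');
  const ws = new WebSocket('wss://localhost/stream'); // assumed endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = async (event) => {
    if (!isPng(event.data)) return;                 // ignore non-frame messages
    const bitmap = await createImageBitmap(new Blob([event.data]));
    ctx.drawImage(bitmap, 0, 0);
    bitmap.close();
  };
}
```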
Related
I am trying to build a demo using a node.js C++ addon. In node.js I get a bitmap in ARGB format, and I need to pass it to an HTML5 canvas. I am trying to find the most efficient way to do this, because the current approach is painfully slow.
Currently I do the following:
--- in node.js ---
- Convert ARGB to RGBA (because ImageData accepts that format)
- Create v8::ArrayBuffer (wrapper over the underlying buffer)
- Create v8::Uint8ClampedArray (wrapper over the array buffer)
- Return an object which has the Uint8ClampedArray, width and height
--- in the browser ---
- Get the result from my function
- Create ImageData instance with the specified width and height
- Loop over all bytes in the Uint8ClampedArray and copy them to the image data
- context.putImageData(image_data, 0, 0);
I am pretty sure there must be a more optimal way to do this. It is not a problem to keep the buffer alive in the addon, but I would like to at least avoid the byte-by-byte copy of the buffer into the image data.
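For reference, both conversion steps above can be done without a per-byte hand-written browser loop over the ImageData; this is only a sketch of the shape described in the steps, with the `result` object assumed to be `{ data, width, height }`.

```javascript
// Reorder A,R,G,B quads into R,G,B,A (the layout ImageData expects).
function argbToRgba(argb) {
  const rgba = new Uint8ClampedArray(argb.length);
  for (let i = 0; i < argb.length; i += 4) {
    rgba[i]     = argb[i + 1]; // R
    rgba[i + 1] = argb[i + 2]; // G
    rgba[i + 2] = argb[i + 3]; // B
    rgba[i + 3] = argb[i];     // A
  }
  return rgba;
}

// Browser side: one bulk copy via TypedArray.set() instead of a byte loop.
function toImageData(result) {
  const imageData = new ImageData(result.width, result.height);
  imageData.data.set(result.data);
  return imageData;
}
```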
I am also not sure why the ImageData constructor that takes a Uint8ClampedArray as its first parameter blows up when I try to use it. In that case, I just get:
v8::Object::GetAlignedPointerFromInternalField(). Error:Not a Smi
Thanks
You can use resterize js to bind data to a canvas. I used it, and it works fast even for large images.
I have a (mostly) offline web app where users can sign off with a digital signature (using this library: https://github.com/szimek/signature_pad).
The image size of a signature is about 50 KB, and it is sent to the server as a base64-encoded JSON string.
Since this data is sent over satellite, I am looking to minimize the bandwidth used for each signature.
Is there any JavaScript library to do lossy compression of the PNG to reduce the file size?
PNG is inherently lossless. If the destination can accept it, use a JPEG instead.
If not, you could try to decimate the image yourself, and then losslessly compress it with PNG. You can also try the PNG-8 mode to compress to a palette of 256 or fewer colors (which might require a lossy step), which should result in a smaller file.
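A sketch of the JPEG route on the client; the 0.5 quality factor is just a starting point to tune, and the flatten-onto-white step is needed because JPEG has no alpha channel. The helper for measuring the payload is an illustration, not part of the library.

```javascript
// Browser-only: re-encode the signature canvas as JPEG at a chosen quality.
function signatureAsJpeg(canvas) {
  const flat = document.createElement('canvas');
  flat.width = canvas.width;
  flat.height = canvas.height;
  const ctx = flat.getContext('2d');
  ctx.fillStyle = '#fff';                 // JPEG has no transparency,
  ctx.fillRect(0, 0, flat.width, flat.height); // so flatten onto white
  ctx.drawImage(canvas, 0, 0);
  return flat.toDataURL('image/jpeg', 0.5);    // quality is an assumption
}

// Rough decoded byte size of a data: URL payload (base64 is 4 chars per 3 bytes).
function payloadBytes(dataUrl) {
  const b64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  return Math.floor(b64.length * 3 / 4);
}
```

Comparing `payloadBytes()` before and after re-encoding gives a quick estimate of the bandwidth saved.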
I know this is a very old question, but I have something that may help. I am assuming that the signature pad library uses the native Canvas.toDataURL function.
I looked at a PNG saved from a canvas with the pngcheck utility, and it uses RGBA - 4 bytes per pixel. For your purposes you could use either a greyscale or a palette-color PNG, which will be much smaller but still lossless.
Large (2192 x 2800) greyscale PNG: 39KB
Same PNG drawn to a canvas (via drawImage) and then saved back to an image via toDataURL: 184 KB.
Furthermore, the data inside a PNG is stored using the DEFLATE method - many libraries, like this one, take a compression level as an argument, basically trading speed for size.
I need to alter the palette data in PNG images using JavaScript. I would like to do this without using WebGL and drawing to a canvas, since this can be inefficient and cause slowdown. However, I'm not experienced with data compression, and I know PNGs use compression.
I have three questions. First, do I need to fully decompress the image, or is it possible to decompress only up to the PLTE chunk(s) and then re-compress the image?
Second, can JavaScript even work with raw binary data, or will I need to get really creative with base64 and string manipulation?
Third, since I'm working with palette rather than truecolor images, and so don't need to handle the IDAT chunks - are the preceding chunks actually compressed? Will this even require me to decompress the image?
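On the third question: the chunks before IDAT (including PLTE) are stored uncompressed, so the palette can be located with plain binary reads, no inflate needed. A sketch of walking the chunk layout (each chunk is a 4-byte big-endian length, a 4-byte ASCII type, the data, then a 4-byte CRC):

```javascript
// Locate a chunk's data region in a PNG byte array by its 4-character type.
function findChunk(bytes, type) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  let offset = 8;                          // skip the 8-byte PNG signature
  while (offset < bytes.length) {
    const length = view.getUint32(offset); // big-endian data length
    const name = String.fromCharCode(
      bytes[offset + 4], bytes[offset + 5], bytes[offset + 6], bytes[offset + 7]);
    if (name === type) return { offset: offset + 8, length }; // start of chunk data
    offset += 12 + length;                 // length field + type + data + CRC
  }
  return null;
}
```

One caveat: after patching bytes inside PLTE you must recompute that chunk's CRC-32, but the IDAT data (the palette indices) can be left untouched.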
For a small image, what (if any) is the benefit in loading time of using a base64-encoded image in a JavaScript file (or in a plain HTML file)?
$(document).ready(function(){
var imgsrc = "../images/icon.png";
var img64 = "P/iaVYUy94mcZxqpf9cfCwtPdXVmBfD49NHxwMraWV/iJErLmNwAGT3//w3NB";
$('img.icon').attr('src', imgsrc); // Type 1
$('img.icon').attr('src', 'data:image/png;base64,' + img64); // Type 2 base64
});
The benefit is that you have to make one less HTTP request, since the image is "included" in a file you have made a request for anyway. Quantifying that depends on a whole lot of parameters such as caching, image size, network speed, and latency, so the only way is to measure (and the actual measurement would certainly not apply to everyone everywhere).
I should mention that another common approach to minimizing the number of HTTP requests is using CSS sprites to put many images into one file. This would arguably be an even more efficient approach, since it also results in fewer bytes being transferred (base64 bloats the byte size by a factor of about 1.33).
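The 1.33 factor falls straight out of the encoding: base64 maps every 3 input bytes to 4 output characters. Quickly verified in node:

```javascript
// base64 expands data by 4/3 (plus a little padding on sizes that are not
// multiples of 3).
const raw = Buffer.alloc(30000);          // stand-in for a 30 KB image
const encoded = raw.toString('base64');
console.log(encoded.length / raw.length); // 1.333...
```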
Of course, you do end up paying a price for this: decreased convenience of modifying your graphics assets.
You need to make multiple server requests. Let's say you download a contrived bit of HTML such as:
<img src="bar.jpg" />
You already needed to make a request to get that: a TCP/IP socket was created and negotiated, the HTML downloaded, and the socket closed. This happens for every file you download.
So off your browser goes to create a new connection and download that JPG, whose contents might amount to no more than: P/iaVYUy94mcZxqpf9cfCwtPdXVmBfD49NHxwMraWV/iJErLmNwAGT3//w3NB
The time to transfer that tiny bit of data is dominated not by the download itself but by the negotiation needed to get to the download part.
That's a lot of work for one image, so you can inline the image with base64 encoding. Mind you, this doesn't work in legacy browsers, only modern ones.
The same idea behind inline base64 data is why we have things like the Closure Compiler (which trades download speed against execution time) and CSS sprites (get as much data from one request as we can, without being too slow).
There's other uses for base64 inline data, but your question was about performance.
Be careful not to think that HTTP overhead is so massive that you should only ever make one request - that's just silly. You don't want to go overboard and inline everything, just really trivial bits. It's not something you should be using in a lot of places. Separation of concerns is good; don't start abusing this because you think your pages will be faster (they'll actually be slower, because the download of that single file becomes massive and your page won't start pre-rendering until it's done).
It saves you a request to the server.
When you reference an image through the src property, the browser loads the page and then makes an additional request to fetch the image.
When you use a base64-encoded image, you save that extra round trip.
I have written an OpenGL game and I want to allow remote playing of the game through a canvas element. Input is easy, but video is hard.
What I am doing right now is launching the game via node.js, and in my rendering loop I send to stdout a base64-encoded stream of bitmap data representing the current frame. The base64 frame is sent via WebSocket to the client page and rendered (painstakingly slowly) pixel by pixel. Obviously this can't stand.
I've been kicking around the idea of generating a video stream, which I could then easily render onto a canvas through a <video> tag (à la http://mrdoob.github.com/three.js/examples/materials_video.html).
The problem I'm having with this idea is that I don't know enough about codecs/streaming to determine at a high level whether it is actually possible. I'm not sure whether the codec is even the part I need to worry about for content that changes dynamically and is possibly rendered only a few frames ahead.
Other ideas I've had:
Trying to create an HTMLImageElement from the base64 frame
Attempting to optimize compression / redraw only changed regions so that the pixel bandwidth is much lower (it seems unrealistic to achieve the kind of performance I'd need for 20+ fps this way).
Then there's always the option of going Flash... but I'd really prefer to avoid it. I'm looking for opinions on technologies to pursue - ideas?
Try transforming RGB in YCbCr color space and stream pixel values as:
Y1 Y2 Y3 Y4 Y5 .... Cb1 Cb2 Cb3 Cb4 Cb5 .... Cr1 Cr2 Cr3 Cr4 Cr5 ...
There would be many repeating patterns, so any compression algorithm will compress it better than an RGBRGBRGB sequence.
http://en.wikipedia.org/wiki/YCbCr
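A sketch of that transform, using the common BT.601 full-range coefficients and producing the planar Y/Cb/Cr layout described above:

```javascript
// Convert a flat [R,G,B, R,G,B, ...] array to planar YCbCr:
// Y1 Y2 ... Cb1 Cb2 ... Cr1 Cr2 ...  (BT.601 full-range coefficients)
function rgbToPlanarYCbCr(rgb) {
  const n = rgb.length / 3;                  // number of pixels
  const out = new Uint8ClampedArray(rgb.length);
  for (let i = 0; i < n; i++) {
    const r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
    out[i]         =       0.299    * r + 0.587    * g + 0.114    * b; // Y plane
    out[n + i]     = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b; // Cb plane
    out[2 * n + i] = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b; // Cr plane
  }
  return out;
}
```

On neutral content (greys, smooth gradients) the Cb and Cr planes become long runs of near-identical bytes, which is exactly what a generic compressor exploits.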
Why base64-encode the data? I think you can push raw bytes over a WebSocket.
If you've got a linear array of RGBA values in the right format you can dump those straight into an ImageData object for subsequent use with a single ctx.putImageData() call.
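Combining both points, a minimal sketch; the frame dimensions and the endpoint URL are assumptions that would be agreed with the server.

```javascript
// Assumed frame size, fixed by agreement with the server.
const WIDTH = 1920, HEIGHT = 1080;

// A raw RGBA frame must be exactly one 4-byte quad per pixel.
function frameIsValid(buffer) {
  return buffer.byteLength === WIDTH * HEIGHT * 4;
}

// Browser-only: wrap each incoming binary frame in ImageData, no per-pixel loop.
function startRawStream(ctx) {
  const ws = new WebSocket('wss://localhost/frames'); // assumed endpoint
  ws.binaryType = 'arraybuffer';
  ws.onmessage = (event) => {
    if (!frameIsValid(event.data)) return;
    const frame = new ImageData(new Uint8ClampedArray(event.data), WIDTH, HEIGHT);
    ctx.putImageData(frame, 0, 0);
  };
}
```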