I have a (mostly) offline webapp where users can sign off with a digital signature (using this library: https://github.com/szimek/signature_pad)
The image size of a signature is about 50K, and is sent to the server as a base64 encoded json string.
Since this data is sent over satellite, I am looking to minimize the bandwidth used for each signature.
Is there any JavaScript library to do a lossy compression of the PNG to reduce the file size?
PNG is inherently lossless. If the destination can accept it, use a JPEG instead.
If not, you could try to decimate the image yourself, and then losslessly compress it with PNG. You can also try the PNG-8 mode to compress to a palette of 256 or fewer colors (which might require a lossy step), which should result in a smaller file.
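If the signature pad exposes its underlying canvas, the JPEG route is a couple of lines on top of the standard canvas API. A minimal sketch, assuming you can reach that canvas element yourself:

// Sketch: re-encode the signature as a JPEG instead of a PNG.
// JPEG has no alpha channel, so a transparent background would come out
// black; composite onto white first. Quality 0.5 is just a starting point.
const src = document.querySelector('canvas');   // the signature pad's canvas (assumed)
const copy = document.createElement('canvas');
copy.width = src.width;
copy.height = src.height;
const ctx = copy.getContext('2d');
ctx.fillStyle = '#fff';
ctx.fillRect(0, 0, copy.width, copy.height);
ctx.drawImage(src, 0, 0);
const jpegDataUrl = copy.toDataURL('image/jpeg', 0.5);  // trade quality for size

Compositing onto an offscreen copy keeps the original signature canvas untouched.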
I know this is a very old question, but I have something that may help. I am assuming that the signature pad library uses the native Canvas.toDataURL function.
I looked at a PNG saved from a canvas with the pngcheck utility and it uses RGBA - 4 bytes per pixel. For your purposes you could use either a greyscale or palette color PNG, which would be a much smaller size but still lossless.
Large (2192 x 2800) greyscale PNG: 39KB
Same PNG drawn to a canvas (via drawImage) and then saved back to an image via toDataURL: 184 KB.
Furthermore, the data inside a PNG is stored using the DEFLATE method - many libraries, like this one, take a compression level as an argument, basically trading speed for size.
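To get a feel for how much the 4-bytes-per-pixel RGBA representation costs, here is a rough sketch that collapses the canvas data to one grey byte per pixel and deflates it at the highest level. It assumes the pako library is loaded and that the signature is dark strokes on a light background; it only estimates the savings, since a real greyscale PNG also needs its chunk structure and line filters.

// Sketch: estimate what dropping to one grey byte per pixel plus maximum
// DEFLATE would save. pako.deflate() is assumed to be available.
const canvas = document.querySelector('canvas');   // signature canvas (assumed)
const ctx = canvas.getContext('2d');
const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);
const grey = new Uint8Array(width * height);
for (let i = 0; i < grey.length; i++) {
  grey[i] = data[i * 4];   // take the red channel; R == G == B for dark-on-light strokes
}
const packed = pako.deflate(grey, { level: 9 });   // level 9 trades speed for size
console.log('RGBA bytes:', data.length, 'grey deflated bytes:', packed.length);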
I have an image stored as a png (I could convert it to bmp too). I want to open it in JavaScript and get the raw RGB bytes. It is enough if this works locally in Chrome. So I open ./index.html in the browser which loads an image in the same directory, e.g. with <img src=myimage.png>. However, I need the proper original data, without any compression or artifacts. I can't use NodeJS.
I saw a similar question, Get image data in JavaScript, but it requires that the image is hosted somewhere. I'm also not sure how to get the raw RGB bytes, the results I got from trying those examples looked like they were still encoded as png.
EDIT: As one of the answers to the other SO question mentions, a canvas will re-encode the data and reading from it won't give me exactly the same values as in the original.
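One possible approach, sketched below, is to read the file's bytes directly (a file input sidesteps file:// fetch restrictions) and hand them to a pure-JavaScript PNG decoder, so the canvas never re-encodes anything. The decode()/toRGBA8() calls assume a library such as UPNG.js and should be checked against whatever decoder is actually used.

// Sketch: read the PNG bytes directly and decode them in pure JavaScript,
// so the canvas never re-encodes anything. UPNG.decode()/UPNG.toRGBA8()
// are shown as documented for UPNG.js; verify against the decoder you use.
document.querySelector('input[type=file]').addEventListener('change', async (e) => {
  const buffer = await e.target.files[0].arrayBuffer();
  const img = UPNG.decode(buffer);                    // parses chunks, inflates IDAT
  const rgba = new Uint8Array(UPNG.toRGBA8(img)[0]);  // frame 0 as raw RGBA bytes
  console.log(img.width, img.height, rgba.length);    // length = width * height * 4
});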
I'm building a Windows RDP (Remote Desktop Protocol) client for the web browser.
I want to stream ImageData (Uint8ClampedArray) to the browser live (every 100 ms), and
I got this working using server-side rendering (node-canvas).
But my current code performs very poorly, because CPU rendering can't keep up (there is simply too much data), so I want to try GPU parallel computing using WebGL on the client. Is that possible?
My first attempt works like this (I'll skip the authentication procedure):
Server-side
Step 1: hook the RDP image data (compressed with an RLE algorithm)
Step 2: decompress the RLE-compressed image data into a Uint8ClampedArray
Step 3: draw it to a canvas with putImageData
Step 4: get the data URL and strip the 'data:image/png;base64,' prefix
Step 5: decode the Base64 into a buffer (the same bytes as the image file) and hand it to Express
Step 6: Express serves it at an image URL (like https://localhost/10-1.png#timestamp)
Step 7: send the image URL to the client using socket.io
Client-side
Step 1: when the site loads, create 64x64 image tags in a div (like Google Maps tiles)
Step 2: receive the image URL and derive the tile coordinates by parsing the image name ('10-1.png' -> x:640, y:64)
Step 3: change the image tag's src to the received image URL.
The current performance is bad (actually not so bad when the resolution is small).
Questions
Is there any way to upload Uint8ClampedArray image data as a texture using three.js?
Is it possible to decompress the RLE-compressed data on the GPU using three.js?
I don't believe there is a way to directly draw ImageData using three.js.
But as I see it you have other options:
First of all, I don't get why you are not just sending JPEG or PNG data via WebSockets. Then you could actually use the PNG and draw it as a sprite within three.js.
That said, I don't think that the bottleneck is the actual drawing of the data to the canvas, and even if it were, WebGL won't help you with that. I just tried it using WebGL versus just using putImageData(). For an HD image (1920x1080) it took on average 14 ms with putImageData() and 70 ms with WebGL, over 1000 drawings.

Now when you need to loop through the pixels because you want to do something in terms of image processing, like edge detection, then it is a whole different story. There WebGL will come out on top for sure. And here is why: WebGL uses the GPU, which means that when you want to do something with the image, you first have to upload all the data to the GPU, which is rather slow. But processing the data is quite fast - much faster than looping through the pixels of the image in JavaScript.
So in conclusion I would say your best bet is to send PNG images via WebSockets using ArrayBuffers, and on the client side draw them to a canvas element.
Here is a link to the webgl stuff that explains how you could use it for image processing: WebGl Fundamentals: Image Processing
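For completeness, here is a minimal sketch of the client side of that suggestion: binary PNG frames arriving over a WebSocket and being drawn straight to a canvas. The endpoint URL and message framing are assumptions, not part of the original setup.

// Sketch of the client side: binary PNG frames over a WebSocket, drawn to a
// canvas. The endpoint URL and message framing are assumptions.
const socket = new WebSocket('wss://localhost/stream');
socket.binaryType = 'arraybuffer';
const ctx = document.querySelector('canvas').getContext('2d');
socket.onmessage = async (event) => {
  const blob = new Blob([event.data], { type: 'image/png' });
  const bitmap = await createImageBitmap(blob);   // decodes off the main thread
  ctx.drawImage(bitmap, 0, 0);
  bitmap.close();
};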
How can we check whether an uploaded image is compressed or not?
I want to draw an image on a canvas, then compress it using canvas.toDataURL(type, quality), but this compression should only be applied to the image if it is not already compressed.
Do you have any suggestion?
You can pretty much determine if the file is compressed by looking at the file type. If you inspect the string toDataURL() produces, you will see a mime-type defining either a PNG or JPEG file - in some cases where browsers support other file formats you can also see BMP and ICO file formats.
We know that a PNG file is always compressed, as the PNG standard only supports compression method 0, which is DEFLATE (LZ77 with Huffman coding), applied on top of line filters that affect the final compressed size.
JPEG is always compressed, as it uses DCT-based encoding.
Compression is optional for BMP as well as for TIFF, though no browsers I know of support TIFF out of the box. It's reasonable to assume BMP and ICO files are uncompressed. They do exist in compressed forms such as RLE, but these are rare and can cause problems for some BMP parsers. To be absolutely sure, though, you would have to parse the binary data and look in the header for compression flags.
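As a rough sketch of that header check for BMP: the biCompression field sits at a fixed offset in the standard BITMAPINFOHEADER, so it can be read with a DataView (the File/Blob input here is an assumption about how you receive the upload).

// Sketch: read the BMP biCompression field (0 = BI_RGB, i.e. uncompressed).
// Offset 30 = 14-byte file header + 16 bytes into BITMAPINFOHEADER.
async function bmpIsCompressed(file) {
  const view = new DataView(await file.arrayBuffer());
  if (view.getUint16(0, true) !== 0x4D42) {        // "BM" magic, little-endian
    throw new Error('Not a BMP file');
  }
  return view.getUint32(30, true) !== 0;           // biCompression field
}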
Notice that toDataURL() always works on a raw, uncompressed bitmap. It does not matter whether the original image drawn to the canvas was compressed or not - the original image is always converted to a raw bitmap before it is drawn (actually, when it is loaded).
After calling toDataURL(), however, the binary image it produces internally is converted to a Base-64 string. This means a size increase of about 33% due to how Base-64 works. On top of that, each char in JavaScript occupies 2 bytes (not a problem when you stay in a JavaScript environment, of course). So the length of the string is not a good indicator, as it may even exceed the raw size (width x height x 4) in some cases. (toBlob() is in any case a better alternative to toDataURL() due to its higher performance and smaller output, as well as being async/non-blocking.)
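A small sketch tying those points together: read the mime-type off the data URL to see which container you got, and use toBlob() to compare the encoded size against the raw bitmap size without the Base-64 overhead.

// Sketch: see which format the canvas actually produced and compare the
// encoded size against the raw bitmap size without the Base-64 overhead.
const canvas = document.querySelector('canvas');
const dataUrl = canvas.toDataURL('image/jpeg', 0.7);
const mimeType = dataUrl.substring(5, dataUrl.indexOf(';'));   // e.g. "image/jpeg"
console.log('encoded as', mimeType);
canvas.toBlob((blob) => {
  const rawSize = canvas.width * canvas.height * 4;            // RGBA bitmap size
  console.log('encoded:', blob.size, 'bytes, raw bitmap:', rawSize, 'bytes');
}, 'image/jpeg', 0.7);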
I am developing a DAM which is hosted in AWS. The user is able to upload heavy files to the system. Under the hood, when an image is uploaded, there is an AWS Lambda function creating a thumbnail for each image.
Obviously, files in .psd and .eps format cannot be displayed in the browser with the typical HTML img element. That is why I need to convert those file formats to .png or .jpg.
Maybe another solution would be to take a "screenshot on the fly" directly in .png. I do not know if this is possible.
The Node.js code running on the Lambda function is very similar to the one here: http://docs.aws.amazon.com/lambda/latest/dg/walkthrough-s3-events-adminuser-create-test-function-create-function.html
Thanks in advance for your help!!
I do not know much about AWS, Lambda and Node.js but can maybe help somewhat with the ImageMagick aspects...
To convert an image from one format to another with ImageMagick, you basically use the convert program with appropriate filename extensions like this in the Terminal, or at the command-line:
convert input.jpg output.png # convert a JPEG to a PNG
EPS files
With EPS, which is a vector format, you generally should set the density first, else ImageMagick will use 72 dpi which makes for horrible quality, so for EPS try something like:
convert -density 144 input.eps output.png
PSD files
With Photoshop PSD files, there is generally a flattened preview image first, with the individual layers following afterwards, so if you are looking to get a preview, you should use this style of command to address the layer 0 preview in the PSD file:
convert input.psd[0] output.png
If you want to reduce the size of an image, you would resize it after loading like this:
convert input.png -resize 512x256 output.png
to make it no larger than 512 pixels wide or 256 pixels tall.
Another thing you may like to do is to strip the metadata (time/date, camera model, creating application, GPS position of camera) out of the images, for that, add in -strip just before the output filename.
Not sure what else I can help with, but hope that gets you started.
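Since the question mentions a Node.js Lambda, here is a very rough sketch of how one of the convert commands above might be invoked from JavaScript. It assumes the convert binary is available in the Lambda runtime and that the input file has already been downloaded to /tmp; the paths and handler shape are made up, not taken from the AWS walkthrough.

// Rough sketch: invoking the ImageMagick CLI from Node.js on Lambda.
// Assumes the convert binary is on the PATH and the input file has already
// been downloaded to /tmp; the paths below are made up.
const { execFile } = require('child_process');

function convertToPng(inputPath, outputPath, callback) {
  // "[0]" picks the flattened preview layer of a PSD;
  // -density only matters for vector formats such as EPS.
  execFile('convert', ['-density', '144', inputPath + '[0]', outputPath],
    (error) => callback(error, outputPath));
}

// Example: convertToPng('/tmp/upload.psd', '/tmp/thumbnail.png', (err) => { ... });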
I need to alter the palette data in PNG images using JavaScript. I would like to do this without using WebGL and drawing to a canvas, since this can be... inefficient, and cause slowdown. However, I'm not experienced with working with data compression, and I know PNGs use compression.
I have three questions. One, do I need to fully decompress the image, or is it possible to only decompress parts up to the PLTE chunk(s) then re-compress the image?
Two, can JavaScript even work with raw binary data, or will I need to get really creative with base64 and string manipulation?
Three, since I'm working with palette rather than truecolor images and so don't need to handle IDAT chunks... are the preceding chunks actually compressed? Is this even going to require me to decompress the image?
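For reference, a rough sketch of the chunk-walking approach, based on the PNG spec: only IDAT data is deflate-compressed, so PLTE can be patched in place as long as the chunk's CRC-32 is recomputed. The ArrayBuffer source and the assumption that the new palette has the same number of entries as the old one are mine.

// Sketch: walk the chunk list in a PNG ArrayBuffer and patch the PLTE chunk
// in place. Only IDAT data is deflate-compressed, so no decompression is
// needed, but the chunk's CRC-32 must be recomputed. Assumes the new palette
// has the same number of entries as the old one.
function patchPalette(buffer, newPalette /* Uint8Array of RGB triplets */) {
  const bytes = new Uint8Array(buffer);
  const view = new DataView(buffer);
  let offset = 8;                                     // skip the 8-byte PNG signature
  while (offset < bytes.length) {
    const length = view.getUint32(offset);            // chunk data length (big-endian)
    const type = String.fromCharCode(...bytes.subarray(offset + 4, offset + 8));
    if (type === 'PLTE') {
      bytes.set(newPalette.subarray(0, length), offset + 8);
      // The CRC covers the chunk type and data, but not the length field.
      const crc = crc32(bytes.subarray(offset + 4, offset + 8 + length));
      view.setUint32(offset + 8 + length, crc);
      return bytes;
    }
    offset += 12 + length;                             // length + type + data + CRC
  }
  throw new Error('No PLTE chunk found (not a palette PNG?)');
}

// Standard CRC-32 as used by PNG (bitwise, no lookup table).
function crc32(data) {
  let crc = 0xFFFFFFFF;
  for (let i = 0; i < data.length; i++) {
    crc ^= data[i];
    for (let k = 0; k < 8; k++) {
      crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xFFFFFFFF) >>> 0;
}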