I need to alter the palette data in PNG images using JavaScript. I would like to do this without using WebGL or drawing to a canvas, since that can be... inefficient and cause slowdown. However, I'm not experienced with data compression, and I know PNGs use compression.
I have three questions. One, do I need to fully decompress the image, or is it possible to only decompress parts up to the PLTE chunk(s) then re-compress the image?
Two, can JavaScript even work with raw binary data, or will I need to get really creative with base64 and string manipulation?
Third, since I'm working with palette and not truecolor images and so don't need to handle IDAT chunks... are the previous chunks actually compressed? Is this going to even require me to decompress the image?
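For reference, a PNG file is an 8-byte signature followed by a sequence of chunks, each laid out as a 4-byte length, a 4-byte type, the data, and a 4-byte CRC; the PLTE data itself is plain RGB triples, so it is not compressed (only chunks such as IDAT carry DEFLATE-compressed data). A rough sketch of locating it with a DataView, assuming the file is already in an ArrayBuffer, might look like this:
// Sketch: walk the PNG chunk list to find a chunk by type (e.g. "PLTE").
// PLTE holds raw 3-byte RGB entries; if you edit them, the chunk's CRC-32
// (computed over the type + data bytes) must be recomputed afterwards.
function findChunk(buffer, type) {
  const view = new DataView(buffer);
  let offset = 8;                                   // skip the 8-byte PNG signature
  while (offset < buffer.byteLength) {
    const length = view.getUint32(offset);          // big-endian data length
    const name = String.fromCharCode(
      view.getUint8(offset + 4), view.getUint8(offset + 5),
      view.getUint8(offset + 6), view.getUint8(offset + 7)
    );
    if (name === type) return { dataOffset: offset + 8, length };
    offset += 12 + length;                          // 4 length + 4 type + data + 4 CRC
  }
  return null;
}
// Usage: const plte = findChunk(pngArrayBuffer, "PLTE");
// plte.length / 3 gives the number of palette entries starting at plte.dataOffset.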
I'm getting one base64 string from an API response, and the other one I get by converting an image (which is in a test data file) to base64 using the Cypress readFile method.
When I use the command below, the assertion fails because of a tracking number difference, which will always be new with every call.
So I'm getting 2 different base64 strings.
//This base64 is from API response
var base64FromAPI =
res.body.completedShipments[0].completedPackages[0].documents[0].image;
//Image is picked from Test Data file and converts to base64
cy.readFile(`cypress/e2e/Testdata/Canada Post 02StoreId.pdf`, "base64").should(
"eq",
base64FromAPI
);
This is because the tracking number on the label (image) generated by the API response is always different.
Is there any way to compare base64 strings in Cypress or JavaScript and ignore some percentage of difference while comparing?
Or is there any other way to do this?
Thanks in advance.
Essentially you can't do this at the base64 level. The differences in a raw bitstream like base64 are totally meaningless; they only become apparent by rendering the image. What you need to do is actually pretty complex! I'm assuming that in your use case it's not possible, or not a good idea, to move away from having the server add the text to the image and, for example, overlay it with the DOM instead.
If that's the case, the only thing you could do is utilise visual regression testing. With this, you can set a threshold that defines what percentage of difference is acceptable.
Since the base64 comes from the API, this would probably also mean having test code that injects an img tag with the base64 as its source, so that the visual snapshot can take place.
This works at the level of image analysis rather than on the actual bitstream. Internally it will render and compare the images.
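As a rough sketch, assuming the document is an image (PNG here) and a plugin such as cypress-image-snapshot is installed and configured, and with the snapshot name and threshold below as placeholders:
// Inject the API's base64 into the DOM so the visual regression plugin
// can snapshot it and compare against a stored baseline.
cy.document().then((doc) => {
  const img = doc.createElement('img');
  img.id = 'label-under-test';
  img.src = 'data:image/png;base64,' + base64FromAPI;
  doc.body.appendChild(img);
});

// failureThreshold is the fraction of differing pixels to tolerate.
cy.get('#label-under-test').matchImageSnapshot('shipping-label', {
  failureThreshold: 0.05,
  failureThresholdType: 'percent',
});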
Another way I can think of, though it is quite complex and I wouldn't pursue it unless the above does not work (a rough sketch follows these steps), is to:
Use image manipulation libraries to load the base64 into an actual rendered image in memory.
Try to cut away/crop the superimposed text using image manipulation libraries in order to reliably remove areas of difference.
Base64-encode that.
Compare that to a known stable base64 of the "rest" of the image.
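A hypothetical sketch of those steps, assuming the Jimp image library running in a Node context (e.g. inside a cy.task), with placeholder crop coordinates that depend on the label layout:
// Crop away the region containing the tracking number, then re-encode to base64.
const Jimp = require('jimp');

async function stableBase64(labelBase64) {
  const image = await Jimp.read(Buffer.from(labelBase64, 'base64'));
  image.crop(0, 0, image.bitmap.width, 300);       // keep only the area without the tracking number
  const dataUrl = await image.getBase64Async(Jimp.MIME_PNG);
  return dataUrl.split(',')[1];                    // strip the "data:image/png;base64," prefix
}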
I have an image stored as a png (I could convert it to bmp too). I want to open it in JavaScript and get the raw RGB bytes. It is enough if this works locally in Chrome. So I open ./index.html in the browser which loads an image in the same directory, e.g. with <img src=myimage.png>. However, I need the proper original data, without any compression or artifacts. I can't use NodeJS.
I saw a similar question, Get image data in JavaScript, but it requires that the image is hosted somewhere. I'm also not sure how to get the raw RGB bytes, the results I got from trying those examples looked like they were still encoded as png.
EDIT: As one of the answers to the other SO question mentions, a canvas will re-encode the data and reading from it won't give me exactly the same values as in the original.
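One route that avoids the canvas entirely is a pure-JavaScript PNG decoder; a sketch assuming the UPNG.js library is loaded via a script tag (and that the folder is served over a local HTTP server so fetch can read the file):
// Decode the PNG in JavaScript and get the raw RGBA bytes, no canvas round-trip.
fetch('myimage.png')
  .then((resp) => resp.arrayBuffer())
  .then((buf) => {
    const img = UPNG.decode(buf);                         // width, height, raw decoded data
    const rgba = new Uint8Array(UPNG.toRGBA8(img)[0]);    // first frame as RGBA, 4 bytes per pixel
    console.log(img.width, img.height, rgba.length);
  });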
I have a (mostly) offline webapp where users can sign off with a digital signature (using this library: https://github.com/szimek/signature_pad)
The image size of a signature is about 50K, and is sent to the server as a base64 encoded json string.
Since this data is sent over satellite, I am looking to minimize the bandwidth used for each signature.
Is there any JavaScript library to do a lossy compression of the PNG to reduce the file size?
PNG is inherently lossless. If the destination can accept it, use a JPEG instead.
If not, you could try to decimate the image yourself, and then losslessly compress it with PNG. You can also try the PNG-8 mode to compress to a palette of 256 or fewer colors (which might require a lossy step), which should result in a smaller file.
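For illustration, a minimal sketch of the JPEG route, assuming the signature is drawn on a canvas element; the 0.5 quality value is an arbitrary placeholder:
// Export the signature canvas as a lossy JPEG instead of a lossless PNG.
// Note: JPEG has no alpha channel, so give the pad an opaque background colour,
// otherwise transparent areas come out black.
const canvas = document.querySelector('canvas');        // the signature pad's canvas
const jpegDataUrl = canvas.toDataURL('image/jpeg', 0.5);
const jpegBase64 = jpegDataUrl.split(',')[1];           // strip the data-URL prefix if the server wants bare base64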
I know this is a very old question, but I have something that may help. I am assuming that the signature pad library uses the native Canvas.toDataURL function.
I looked at a PNG saved from a canvas with the pngcheck utility, and it uses RGBA, i.e. 4 bytes per pixel. For your purposes you could use either a greyscale or palette-color PNG, which will be much smaller but still lossless.
Large (2192 x 2800) greyscale PNG: 39 KB
Same PNG drawn to a canvas (via drawImage) and then saved back to an image via toDataURL: 184 KB
Furthermore, the data inside a PNG is stored using the DEFLATE method; many libraries, like this one, take a compression level as an argument, basically trading speed for size.
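To illustrate that trade-off, here is a small sketch assuming the pako DEFLATE library is available; the input data is a placeholder:
// Higher DEFLATE levels produce smaller output at the cost of CPU time.
const raw = new Uint8Array(1024 * 1024).fill(42);   // placeholder "pixel" data
const fast = pako.deflate(raw, { level: 1 });       // faster, larger
const small = pako.deflate(raw, { level: 9 });      // slower, smaller
console.log(fast.length, small.length);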
For a small image, what's the benefit (if any) in loading time of using a base64-encoded image in a JavaScript file (or in a plain HTML file)?
$(document).ready(function () {
  var imgsrc = "../images/icon.png";
  var img64 = "P/iaVYUy94mcZxqpf9cfCwtPdXVmBfD49NHxwMraWV/iJErLmNwAGT3//w3NB";
  $('img.icon').attr('src', imgsrc); // Type 1
  $('img.icon').attr('src', 'data:image/png;base64,' + img64); // Type 2 base64
});
The benefit is that you have to make one less HTTP request, since the image is "included" in a file you have made a request for anyway. Quantifying that depends on a whole lot of parameters such as caching, image size, network speed, and latency, so the only way is to measure (and the actual measurement would certainly not apply to everyone everywhere).
I should mention that another common approach to minimizing the number of HTTP requests is using CSS sprites to put many images into one file. This would arguably be an even more efficient approach, since it also results in fewer bytes being transferred (base64 encodes every 3 bytes as 4 characters, bloating the byte size by a factor of about 1.33).
Of course, you do end up paying a price for this: decreased convenience of modifying your graphics assets.
You need to make multiple server requests. Let's say you download a contrived bit of HTML such as:
<img src="bar.jpg" />
You already needed to make a request to get that: a TCP/IP socket was created and negotiated, that HTML was downloaded, and the socket was closed. This happens for every file you download.
So off your browser goes to create a new connection and download that jpg, whose entire content might be something as tiny as P/iaVYUy94mcZxqpf9cfCwtPdXVmBfD49NHxwMraWV/iJErLmNwAGT3//w3NB
The time to transfer that tiny bit of text was massive, not because of the file download, but simply because of the negotiation to get to the download part.
That's a lot of work for one image, so you can in-line the image with base64 encoding. This doesn't work with legacy browsers mind you, only modern ones.
The same idea behind base64 inline data is why we've done things like the Closure Compiler (optimizing speed of download against execution time) and CSS sprites (getting as much data from one request as we can, without being too slow).
There's other uses for base64 inline data, but your question was about performance.
Be careful not to think that the HTTP overhead is so massive that you should only make one request; that's just silly. You don't want to go overboard and inline all the things, just really trivial bits. It's not something you should be using in a lot of places. Separation of concerns is good; don't start abusing this because you think your pages will be faster (they'll actually be slower, because the download of that single file becomes massive, and your page won't start pre-rendering till it's done).
It saves you a request to the server.
When you reference an image through the src attribute, the browser loads the page and then makes an additional request to fetch the image.
When you use the base64 encoded image, it'll save you that delay.
I have written an OpenGL game and I want to allow remote playing of the game through a canvas element. Input is easy, but video is hard.
What I am doing right now is launching the game via node.js and in my rendering loop I am sending to stdout a base64 encoded stream of bitmap data representing the current frame. The base64 frame is sent via websocket to the client page, and rendered (painstakingly slowly) pixel by pixel. Obviously this can't stand.
I've been kicking around the idea of trying to generate a video stream, which I could then easily render onto a canvas through a video tag (à la http://mrdoob.github.com/three.js/examples/materials_video.html).
The problem I'm having with this idea is that I don't know enough about codecs/streaming to determine, at a high level, whether this is actually possible. I'm not sure whether the codec is even the part I need to worry about for content that changes dynamically and is possibly only rendered a few frames ahead.
Other ideas I've had:
Trying to create an HTMLImageElement from the base64 frame
Attempting to optimize compression / redraw regions so that the pixel bandwidth is much lower (seems unrealistic to achieve the kind of performance I'd need to get 20+fps).
Then there's always the option of going Flash... but I'd really prefer to avoid it. I'm looking for opinions on technologies to pursue. Ideas?
Try transforming RGB into the YCbCr color space and stream the pixel values as:
Y1 Y2 Y3 Y4 Y5 .... Cb1 Cb2 Cb3 Cb4 Cb5 .... Cr1 Cr2 Cr3 Cr4 Cr5 ...
There would be many repeating patterns, so any compression algorithm will compress it better than an RGBRGBRGB sequence.
http://en.wikipedia.org/wiki/YCbCr
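A sketch of that rearrangement, converting interleaved RGBA to planar Y, Cb, Cr arrays using the standard JPEG-style BT.601 constants:
// Convert interleaved RGBA pixels into three planar arrays: all Y, then all Cb, then all Cr.
function toPlanarYCbCr(rgba, width, height) {
  const n = width * height;
  const y = new Uint8ClampedArray(n);
  const cb = new Uint8ClampedArray(n);
  const cr = new Uint8ClampedArray(n);
  for (let i = 0; i < n; i++) {
    const r = rgba[i * 4], g = rgba[i * 4 + 1], b = rgba[i * 4 + 2];
    y[i]  =       0.299    * r + 0.587    * g + 0.114    * b;
    cb[i] = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b;
    cr[i] = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b;
  }
  return { y, cb, cr };   // stream y, then cb, then cr, and compress that
}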
Why base64 encode the data? I think you can push raw bytes over a WebSocket.
If you've got a linear array of RGBA values in the right format you can dump those straight into an ImageData object for subsequent use with a single ctx.putImageData() call.
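A minimal sketch of that approach, with the frame size and WebSocket endpoint as placeholder assumptions:
// Receive raw RGBA frames over a binary WebSocket and blit each one to the canvas.
const WIDTH = 640, HEIGHT = 480;                      // must match the server's frame size
const ctx = document.querySelector('canvas').getContext('2d');

const ws = new WebSocket('ws://localhost:8080');      // hypothetical endpoint
ws.binaryType = 'arraybuffer';                        // raw bytes, no base64/string framing

ws.onmessage = (event) => {
  const pixels = new Uint8ClampedArray(event.data);   // expects WIDTH * HEIGHT * 4 bytes
  ctx.putImageData(new ImageData(pixels, WIDTH, HEIGHT), 0, 0);
};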