Is there a library like canvas2image (please look at createBMP function) for making .tiff in JavaScript (browser or nodejs)?
Native browser support for TIFF files is still pretty bad. Wikipedia has a nice overview of browser image-format support.
That being said, since a .tiff image is still essentially a raster image, one could indeed convert it to another (browser-supported) raster format and feed it to an <img> as a data: URI. The tricky part is supporting the different compression algorithms (PACKBITS, DEFLATE, LZW, etc.).
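As a rough illustration of that conversion step, here is a minimal sketch, assuming some decoder (not shown, hypothetical) has already produced raw RGBA pixels: paint them onto a canvas and re-encode as a browser-supported format.

function showDecodedTiff(rgba, width, height) {
  // rgba: Uint8ClampedArray of width * height * 4 bytes from your TIFF decoder
  var canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  var ctx = canvas.getContext('2d');
  var imageData = ctx.createImageData(width, height);
  imageData.data.set(rgba);                 // copy the decoded pixels in
  ctx.putImageData(imageData, 0, 0);
  var img = new Image();
  img.src = canvas.toDataURL('image/png');  // re-encode to a browser-supported format
  document.body.appendChild(img);
}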
There is a library called Tiffus: a client-side, pure JavaScript imaging library to load, save and manipulate binary images.
The original project aim was to create a plain JavaScript Chrome extension which can convert single/multi-page TIFF images to BMP/GIF images (this is where the name came from).
However now it supports:
Windows BMP (no compression, RLE)
OS/2 BMP
ICO
GIF
JPEG
PNG
TIFF
and currently supports the following image functions:
load
save
resize
flip
invert color
Basically it works like this:
Source image downloaded as binary data using XMLHttpRequest with MimeType('text/plain; charset=x-user-defined') (future: HTML5 Canvas ImageData)
Image processing using Tiffus
Destination image shown as Data URI scheme (future: HTML5 Canvas ImageData)
Note that, according to the above, the author expects to use HTML5 Canvas ImageData in the future.
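For reference, the binary download step described above relies on a legacy XMLHttpRequest trick. A minimal sketch (the image URL is a placeholder):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'image.tif', true); // placeholder URL
// Legacy trick: force the response to be treated as binary-safe text.
xhr.overrideMimeType('text/plain; charset=x-user-defined');
xhr.onload = function () {
  var text = xhr.responseText, bytes = new Uint8Array(text.length);
  for (var i = 0; i < text.length; i++) {
    bytes[i] = text.charCodeAt(i) & 0xff; // mask each char code down to one byte
  }
  // bytes now holds the raw image data, ready for the library to parse.
};
xhr.send();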
Hope this helps!
Related
I wrote a C++/C# Windows program that creates an HTML report using an XSL transform. The HTML report includes canvas element drawings. I give the user the option of converting the HTML report to a Word doc (if it's installed), but, although the conversion works fine, it ignores the canvas drawings. As a workaround, I would like to convert the canvas elements to PNG images and "export" them to the client PC (that is, the PC using my program to create the report). I know how to convert the canvas to a dataURL and then to a blob, but I can't figure out how to get the blob file onto the client PC, which of course is not a web server.
Also, once the file is on the client, can the blob be treated as a png image?
[Edit]: I missed that the export is made to a .doc file.
In this case, it depends on how you are converting your HTML to .doc.
In the case of a .docx, it should be possible to include your canvases' images as attachments in the .docx internal file tree, under word > media.
In the case of a .doc, the image file seems to be merged into the .doc file itself; it might be possible to use a FileReader to append it.
If you are using a library, it depends on which one. The first one I found on Google (html-docx-js) claims it needs your images as data URIs to be able to save them.
And to convert all your canvases to data URIs, you can use something along the lines of:
document.querySelectorAll('canvas').forEach(canvas => {
  var img = new Image();
  try {
    img.src = canvas.toDataURL();
    canvas.replaceWith(img);
  } catch (e) {
    handleTaintedCanvas(); // I let you decide what should happen here
  }
});
So once it's done, you would just have to call this lib, and it should work.
[Previous] This answers how to export to a stand-alone .html file.
For better compatibility across devices, convert your canvases to dataURI png images.
The Blob way
If you want to go the Blob way, you'd have to save a folder containing your HTML file and all the images as separate files, with correct references to them in the HTML markup.
That's possible too; e.g. you could generate a zip file with the correct structure and all the included files, but some browsers don't accept loading external files from the file:// protocol, and your user would have to keep the directory structure untouched.
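For illustration only, a sketch of that zip approach with the JSZip library (assuming your images are already available as blobs, e.g. from canvas.toBlob(); names are placeholders):

var zip = new JSZip();
zip.file('report.html', htmlString);      // page markup referencing img/0.png, img/1.png, ...
var img = zip.folder('img');
canvasBlobs.forEach(function (blob, i) {  // canvasBlobs: array of PNG blobs
  img.file(i + '.png', blob);
});
zip.generateAsync({ type: 'blob' }).then(function (archive) {
  // hand `archive` to the user, e.g. via a download link
});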
So all in all:
a lot of work for you,
little advantage (you'd only save the ~37% size overhead that base64 encoding adds to image files...),
complications for your users (not all of them even know what to do with a zip file...)
<canvas> to dataURL
To convert your canvases to dataURI, a simple function would be
[Moved to edit]
And from now on, all your clean canvases will be converted to data URI images, which you'll be able to export with your HTML file, which will then be standalone.
<script> to dataURL
Note that there is also a third way to handle it (it could actually be the best one, depending on the nature of your original page):
You can convert all your <script> tags to data URIs and keep your canvases in the document. This way, you'll still have a standalone working HTML file. But this implies that you don't need external resources.
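A sketch of that third way, assuming all scripts are same-origin so they can be fetched:

document.querySelectorAll('script[src]').forEach(function (s) {
  fetch(s.src)
    .then(function (res) { return res.text(); })
    .then(function (code) {
      // btoa only handles Latin-1, hence the encodeURIComponent round-trip
      s.src = 'data:text/javascript;base64,' +
              btoa(unescape(encodeURIComponent(code)));
    });
});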
My program Precomp can be used to further compress already compressed file formats like GIF, PNG, PDF, ZIP and more. Roughly summarized, it does this by decompressing the compressed streams, recompressing them and storing the differences between the expected compressed stream and the actual compressed stream. As an example, this rotating earth picture from Wikipedia is compressed from 1429 KB to 755 KB. The process is lossless, so the original GIF file can be restored.
The algorithm for the GIF file format can be isolated and implemented relatively easily, so I was thinking about a proof-of-concept implementation in JavaScript. This would lead to the web server sending a compressed version of the GIF file (.pcf ending, essentially a bzip2-compressed file of the GIF image contents) and the client decompressing the data, recompressing it to GIF and displaying it. The following things would have to be done:
The web site author would have to compress his GIF files using the standard version of Precomp and serve these instead of the GIF files, together with a JavaScript for the client-side recompression.
The client would decompress the bzip2-compressed file; this could be done using one of the existing bzip2 JavaScript implementations.
The client would recompress the image content into the original GIF file.
The process is a trade-off of bandwidth against CPU usage on the client side.
Now my questions are the following:
Are there any general problems with the process of loading a different file and "converting" it to GIF?
What would you recommend to display before the client side finishes (image placeholder)?
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
Let's begin by answering your specific questions. Code example below.
Q&A
Are there any general problems with the process of loading a different file and "converting" it to GIF?
The main problem is complication. You are effectively writing a browser addon, like those for JPEG2000.
If you are writing real browser addons, each major browser does it differently and changes addon formats occasionally, so you have to actively maintain them.
If you are writing a JS library, it will be easier to write and maintain, but it will be unprivileged and suffer from limitations such as cross-origin restrictions.
What would you recommend to display before the client side finishes (image placeholder)?
Depends on what your format can offer.
If you encode the image dimensions and a small thumbnail early, you can display an accurate placeholder pretty early.
It is your format, after all.
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Nothing different from other files.
Configure the Expires and Cache-Control header on server side and they will be cached.
Manifest and prefetch can also be used.
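For example, a hedged sketch of such server-side configuration with Node/Express (the route and max-age are arbitrary choices, not requirements):

var express = require('express');
var app = express();
// Serve .pcf files with long-lived caching so repeat visits cost no bandwidth.
app.use('/pcf', express.static('pcf', {
  maxAge: '30d', // emits Cache-Control: public, max-age=2592000
  setHeaders: function (res) {
    res.setHeader('Expires', new Date(Date.now() + 2592000000).toUTCString());
  }
}));
app.listen(8080);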
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
This is tricky. When JavaScript is disabled, you can only replace elements, not attributes.
This means you cannot create an image somewhere that points to the .pcf files, and ask browser to rewrite the src attribute when JS is unavailable.
I think the best solution to support no JS is outputting the images with document.write, using noscript as a fallback:
<noscript>
<img src=demo.gif width=90>
</noscript><script>
loadPcf("demo.pcf","width=90")
</script>
(Some library or framework may make you consider <img src=demo.gif data-pcf=demo.pcf>.
This will not work for you, because browsers will preload 'demo.gif' before your script kicks in, causing additional data transfer.)
Alternatively, browser addons are unaffected by "disable JS" settings, so if you make addons instead then you don't need to worry about it.
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Perhaps. You can code a user interface and store the preference in a cookie or in localStorage.
Then you can detect the preference and switch the logic in server code (if cookie) or in client code.
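A minimal sketch of the client-side variant with localStorage (the key and mode names are assumptions):

// 'bandwidth' = download .pcf and recompress; 'cpu' = load the plain gif.
function setPcfMode(mode) { localStorage.setItem('pcfMode', mode); }
function getPcfMode()     { return localStorage.getItem('pcfMode') || 'bandwidth'; }

if (getPcfMode() === 'bandwidth') {
  loadPcf('demo.pcf', 'width=90');               // recompress on the client
} else {
  document.write('<img src=demo.gif width=90>'); // spend bandwidth, save CPU
}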
If you are doing addons, all browsers provide reliable preference mechanism.
The problem is that, again, every browser does it differently.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
If you hand browsers a partial image, they may think the image is corrupted and refuse to show it.
In this case you have to implement your own GIF decoder AND encoder so that you can hand the browser a complete placeholder image, just to be safe.
(new) Can I decode image loaded from another site?
I must also repeat the warning that non-addon JS image decoding does not work with cross-origin images.
This means all .pcf files must be on the same server, same port, and same protocol as the site using them.
For example, you cannot share images across multiple sites or do optimisations like domain sharding.
Code Example
Here is a minimal example that creates an <img>, loads a GIF, halves its width, and puts it back into the <img>.
To support placeholder or progressive loading, listen to onprogress instead of/in addition to onload.
<!DOCTYPE html><html><head><meta charset="UTF-8"><script>
function loadPcf( file, attr ) {
  var atr = attr || '', id = loadPcf.autoid = 1 + ~~loadPcf.autoid;
  document.write( '<img id=pcf'+id+' ' + atr + ' />' );
  var xhr = new XMLHttpRequest();
  xhr.responseType = 'arraybuffer'; // IE 10+ only, sorry.
  xhr.onload = function() { // I am loading gif for demo, you can load anything.
    var data = xhr.response, img = document.querySelector( '#pcf' + id );
    // Parentheses matter: "!data instanceof ArrayBuffer" would always be false.
    if ( ! img || ! ( data instanceof ArrayBuffer ) ) return;
    var buf = new DataView( data ), head = buf.getUint32(0), width = buf.getUint16(6,1);
    if ( head !== 1195984440 ) return console.log( 'Not a GIF: ' + file ); // 'GIF8' = 1195984440
    // Modify data, image width in this case, and push it to the <img> as gif.
    buf.setInt16( 6, ~~(width/2), 1 );
    img.src = URL.createObjectURL( new Blob( [ buf.buffer ], { type: "image/gif" } ) );
  };
  xhr.open( 'GET', file );
  xhr.send();
}
</script>
<h1>Foo<noscript><img src=a.gif width=90></noscript><script>loadPcf("a.gif","width=90")</script>Bar</h1>
If you don't need <noscript> compatibility (and thus prevent facebook/google+ from seeing the images when the page is shared), you can put the pcf file in the <img> src and use JS to handle them en masse (see the sketch below), so that you don't need to call loadPcf for each image, which will make the HTML much simpler.
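A sketch of that en-masse variant (loadPcfInto is a hypothetical element-based version of loadPcf above, not part of the example):

// Find every image whose src points at a .pcf file and recompress each in place.
Array.prototype.forEach.call(
  document.querySelectorAll('img[src$=".pcf"]'),
  function (img) {
    loadPcfInto(img, img.getAttribute('src'));
  }
);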
How about <video>?
What you envisioned is mostly doable, in theory, but perhaps you should reconsider.
Judging from the questions you ask, it will be quite difficult for you to define and pull off your vision smoothly.
It is perhaps better to encode your animation in WebM and use <video> instead.
Better browser support, back to IE 9 - just add H.264 to make it a dual-format video. (You need IE 10+ to modify binary data anyway.)
Size: movies have many, many options and tricks to minimise size, and what you learn can be reused in the future.
Progressive: <video> has had some techniques for adaptive video, and hopefully they will stabilise soon.
JavaScript: <video> does not depend on JavaScript.
Future-proof: both WebM and H.264 will be supported by many programs, long after you stop working on your special format.
Cost-effective: creating a low-bandwidth option using smaller or lower-quality media is easier and more reliable than a custom format. This is why Wikipedia and YouTube offer their media in different resolutions.
For non-animations, PNG can also be colour-indexed and 7z-optimised (keeping the PNG format).
An icon-sized indexed PNG is often smaller than the same GIF.
Or perhaps your vision (as described in the pcf website) is the capability to encode many different files, not only GIF.
This will be more like supporting a new network protocol, and is likely beyond the scope of humble JavaScript. (E.g., how are you going to handle PDF download or streaming?)
I want to validate image file uploads client-side.
There will be server-side validation, too, which is already working with ImageMagick.
I would like to reproduce this on the client side before uploading. Since the files will be quite large and the prerequisites for the image files are very strict, it could save the user much pain if the validation takes place in the browser before the upload process.
Allowed files would be:
JPEG
EPS
TIFF
I need to detect:
Color Space (CMYK / RGB)
Size (width x height) // this one is easy - on JPEGs, but how about TIFF and EPS?
Resolution (dpi)
The main problem is detecting the color space and handling the non-JPEG formats. Is there something like ImageMagick's "identify" for JavaScript, or do you have any other ideas...?!
Take a look at this. It uses HTML5 APIs, but it looks like what you are looking for.
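For the size-detection part on TIFF, a hedged sketch using FileReader and DataView (assuming a baseline TIFF whose first IFD stores ImageWidth/ImageLength, tags 256/257, as SHORT or LONG values):

function tiffDimensions(buffer) {
  var dv = new DataView(buffer);
  var little = dv.getUint16(0) === 0x4949;           // 'II' = little-endian, 'MM' = big-endian
  if (dv.getUint16(2, little) !== 42) return null;   // TIFF magic number
  var ifd = dv.getUint32(4, little);                 // offset of the first IFD
  var entries = dv.getUint16(ifd, little), dims = {};
  for (var i = 0; i < entries; i++) {
    var e = ifd + 2 + i * 12;                        // each IFD entry is 12 bytes
    var tag = dv.getUint16(e, little);
    var type = dv.getUint16(e + 2, little);
    var value = (type === 3) ? dv.getUint16(e + 8, little)  // SHORT
                             : dv.getUint32(e + 8, little); // LONG
    if (tag === 256) dims.width = value;
    if (tag === 257) dims.height = value;
  }
  return (dims.width && dims.height) ? dims : null;
}

// Usage with a file input:
var reader = new FileReader();
reader.onload = function () { console.log(tiffDimensions(reader.result)); };
reader.readAsArrayBuffer(fileInput.files[0]);        // fileInput: your <input type=file>

Color space and DPI would similarly come from tags 262 (PhotometricInterpretation) and 282/283 (X/YResolution); EPS, being PostScript, would need its %%BoundingBox comment parsed instead.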
Background:
I'm developing an HTML5 webapp for my company which is basically a Rich Text Editor (similar to Google Docs) that stores information in a database.
We are using CKEditor 3 as the rich-text editor and jQuery to accomplish this.
We've chosen Google's Chrome as the preferred browser.
Our app is currently in its alpha-testing period, with a group of 18 testers (the same people who will use the app). The group is heterogeneous, but almost all of them have basic computer skills, mostly limited to MS Word and MS Excel.
Problem:
Most of our users still use Word to write the document, mainly due to its capacity for generating rich flowcharts. When they copy/paste the generated content into Chrome, images are pasted as links to a local file (auto-generated by the OS, in a users/*/temp folder). This means the server can't access these files and the resulting documents (generated PDFs) don't contain the images.
Question
How can I force pasted images to be encoded in base64, similar to what happens in Firefox?
Notes
If it's possible to "upload" to the server an image referenced as src="file://c:\something", that would solve my problem, as I can base64-encode that image later.
We can't switch to Firefox since it doesn't fully solve our problem (if an image is "pasted" alongside text, Firefox doesn't base64-encode it) and it raises other issues, such as a horizontal scrollbar appearing when the text is too long to fit in the textarea.
Yes and no, I believe.
It is possible to intercept the paste event and fetch the pasted image as a file, then use FileReader to read the file as a Data URI (base 64 encoded PNG).
However, Word seems to send a reference to a local file, which generates a security exception (at least on Chrome) because of a cross-domain request (http://... and file:///...). As far as I know, there is no way to get the actual contents of such local files, and the contents are not sent as clipboard data itself.
If you copy a "pure" image (e.g. out of Paint), you can get the base 64 encoded data as follows: http://jsfiddle.net/pimvdb/zTAuR/. Or append the image as a base 64 encoded PNG in the div: http://jsfiddle.net/pimvdb/zTAuR/2/.
div.onpaste = function(e) {
  var data = e.clipboardData.items[0].getAsFile();
  var fr = new FileReader();
  fr.onloadend = function() {
    alert(fr.result.substring(0, 100)); // fr.result is all data
  };
  fr.readAsDataURL(data);
};
What is the best way to generate image data from the contents of an HTML canvas element?
I'd like to create the image data such that it can be transmitted to a server (it's not necessary for the user to be able to directly save to a file). The image data should be in a common format such as PNG or JPEG.
Solutions that work correctly in multiple browsers are preferred, but if every solution depends on the browser, recent versions of Firefox should be targeted.
Firefox and Opera have a toDataURL() method that returns a data-URL formatted PNG. You can assign the result to a form field to submit it to the server.
The data URL is base-64 encoded, so you will have to decode it on the server side. You would also need to strip off the "data:image/png;base64," prefix, of course.
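For instance, a minimal sketch of the client side (the hidden field's id is an assumption):

var canvas = document.getElementsByTagName('canvas')[0];
var dataUrl = canvas.toDataURL('image/png');          // "data:image/png;base64,iVBOR..."
// Keep only the base-64 payload; the server decodes it back to PNG bytes.
var base64 = dataUrl.split(',')[1];
document.getElementById('imageData').value = base64;  // hypothetical hidden form input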
I think a lib you can use is Canvas2Image; it uses native Canvas features, but it won't work in every browser. I have an optimized version of this lib; if you want, I'll share it with you.
Then you could get the generated Data URI and send it using Ajax to the server.
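For example, a sketch of that Ajax step ('/upload' is a placeholder endpoint):

var dataUri = document.getElementsByTagName('canvas')[0].toDataURL('image/png');
var xhr = new XMLHttpRequest();
xhr.open('POST', '/upload', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('image=' + encodeURIComponent(dataUri)); // server URL-decodes, then base64-decodes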