Can html2canvas capture a javascript 3d 'screenshot'? - javascript

I have seen an example of html2canvas create a 'screenshot' of an html block (ex: html2canvas example).
What I would like to do is, using javascript, capture a 'screenshot' of a three.js (or any js 3d library) scene.
Keep in mind that the user might play with (rotate, etc) the 3d scene before deciding to 'screenshot' it.
Can html2canvas 'capture' a 3d scene?
EDIT
See the plunker for a working solution.

Well, since three.js draws to a canvas, I'm pretty sure you can replicate its current state without the use of html2canvas.
function cloneCanvas(oldCanvas) {
    // create a new canvas
    var newCanvas = document.createElement('canvas');
    var context = newCanvas.getContext('2d');

    // set dimensions
    newCanvas.width = oldCanvas.width;
    newCanvas.height = oldCanvas.height;

    // apply the old canvas to the new one
    context.drawImage(oldCanvas, 0, 0);

    // return the new canvas
    return newCanvas;
}
Thanks to Robert Hurst for his answer on the subject
Regarding html2canvas, this paragraph of the docs leads me to believe that it will be able to copy the canvas, since it is not tainted with cross-origin content.
Limitations
All the images that the script uses need to reside under the same origin for it to be able to read them without the assistance of a proxy. Similarly, if you have other canvas elements on the page, which have been tainted with cross-origin content, they will become dirty and no longer readable by html2canvas.
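One caveat worth noting when cloning or capturing a three.js canvas: by default a WebGL drawing buffer may be cleared after compositing, so reading it later can yield a blank image. A minimal sketch (not from the original answer), assuming a renderer created with `{ preserveDrawingBuffer: true }` or a fresh `render()` call right before capture:

```javascript
// Sketch: capture a three.js scene as a PNG data URL.
// `renderer`, `scene`, and `camera` are the usual three.js objects.
// Either create the renderer with { preserveDrawingBuffer: true },
// or (as here) render immediately before reading the canvas.
function captureScene(renderer, scene, camera) {
  renderer.render(scene, camera);                    // make sure the buffer is fresh
  return renderer.domElement.toDataURL('image/png'); // snapshot as a data URL
}
```

The returned data URL can then be assigned to an `img.src`, or drawn onto a 2D canvas with `drawImage`, the same way `cloneCanvas` does.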

Related

High performance pixel manipulation in HTML5

I am trying out high-speed image display in HTML5/JS. I am fairly new to HTML5/JS and am trying to evaluate the options available for rendering images to the screen.
Input is an array of pixels.
I played around with CanvasRenderingContext2D, and used a canvas and putImageData to render pixels onto screen, but I ended up with slower FPS with increase in image size. The use case I want to address is rendering raw pixel arrays onto screen, and not drawing shapes/drawings. My target is to render at the highest possible FPS.
var imageCanvas = document.getElementById('imageCanvas');
var imageContext = imageCanvas.getContext('2d', { alpha: false });
var pixelData = imageContext.createImageData(imageWidth, imageHeight);
// ---------------------------------------------------------
// here: fill up pixelData with image pixels to be rendered
// ---------------------------------------------------------
imageContext.putImageData(pixelData, 0, 0);
For such a use case, how should I proceed? I have read in many places that putImageData is the bottleneck here, and that a better way would be to use drawImage. Is it possible to avoid putImageData and instead use drawImage to read from pixelData and render?
How efficiently can I use GPU?
Does WebGLRenderingContext suit this scenario?
For high-performance graphics on an HTML canvas, there's really only one way to go: WebGL, which is based on OpenGL ES.
Using the raw WebGL API can be tedious, so you may want to use a library such as Three.js or PixiJS.
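As a sketch of the WebGL path this answer suggests: once a shader program that samples a texture over a full-screen quad is set up (the setup is omitted here), the per-frame work for displaying a raw pixel array reduces to one texture upload and one draw call. Names and assumptions are illustrative, not from the question:

```javascript
// Sketch (browser-only): re-upload a raw RGBA pixel array to the GPU each
// frame and redraw. Assumes `gl` is a WebGLRenderingContext with a program
// already bound that draws a textured full-screen quad as a triangle strip.
function drawFrame(gl, texture, pixels, width, height) {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels); // pixels: Uint8Array, RGBA order
  gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);           // full-screen quad
}
```

The upload (`texImage2D`) is usually the dominant cost; the draw itself is essentially free compared to `putImageData` on a 2D context at large sizes.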

Is there a way to get WebGL to render offscreen, without the webgl canvas attached to the DOM?

I thought this would be an easy question to find an answer for, but it's turning out to be very elusive. Basically, I'm trying to use WebGL to do the heavy lifting on some image processing and generation tasks, and I want it to work offscreen. Ideally, I'd like WebGL to render a scene to a framebuffer that I can gl.readPixels() from, or to the webgl canvas so I can use it as a source for context.drawImage(). The thing is, I don't want to display the webgl canvas itself, I only want to copy portions of it to a visible canvas that I have a regular "2d" context for. So far, I can't seem to get it to work without the following initialization, which seems like an ugly hack to me:
glCanvas = document.createElement('canvas');
glCanvas.width = 256;
glCanvas.height = 256;
document.body.appendChild(glCanvas); // seems necessary, but why?
glCanvas.style.visibility = 'hidden'; // ugh, is this the only way?
Currently, I'm rendering a scene to a framebuffer I create with gl.createFramebuffer(), using a single gl.drawArrays() call, and then using gl.readPixels() to get the result out to an ImageData. With the glCanvas attached to the DOM it works exactly as planned. But if I leave off the last two lines of code above, where I attach the glCanvas and hide it, I get no image when I try to readPixels() from the framebuffer I've rendered to.
I don't understand why the glCanvas needs to be attached to the DOM, if it's not even being rendered to. I thought that rendering to a framebuffer happens entirely on the graphics card and that the buffer will persist for me to use as a texture. Is it simply the case that WebGL doesn't render at all unless there is at least one webgl-context canvas attached to the DOM? Or am I missing some other detail? Thanks!
You don't have to have the canvas in the DOM to use WebGL. There are plenty of tests in the WebGL Conformance Tests that run without ever adding a canvas to the DOM:
var gl = document.createElement("canvas").getContext("webgl");
gl.clearColor(1,0,1,1); // purple;
gl.clear(gl.COLOR_BUFFER_BIT);
// draw in 2d canvas
var ctx = document.querySelector("#c2d").getContext("2d");
ctx.drawImage(gl.canvas, 10, 20, 30, 40);
<canvas id="c2d"></canvas>
Maybe you should post some more code. You probably have some other bug.
Update
Coming soon is also the OffscreenCanvas API. As of 2017-11-15, it's available in both Firefox and Chrome behind a flag. It allows using WebGL in a Web Worker.
Update 2
As of Chrome 69 OffscreenCanvas is shipping without a flag.
It seems like you want to use an OffscreenCanvas. You can learn more here.
You should do something like this:
const offscreen = document.querySelector('canvas').transferControlToOffscreen();
const worker = new Worker('myworkerurl.js');
worker.postMessage({canvas: offscreen}, [offscreen]);
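For completeness, here is a sketch of what the receiving end of that message might look like (the file name `myworkerurl.js` comes from the snippet above; everything else here is an illustrative assumption, not the answerer's code):

```javascript
// Hypothetical worker script (myworkerurl.js): receive the transferred
// OffscreenCanvas and render to it with WebGL. This only runs inside a
// Web Worker; frames drawn here appear on the on-screen canvas that
// transferControlToOffscreen() was called on.
function handleCanvasMessage(evt) {
  var canvas = evt.data.canvas;          // the transferred OffscreenCanvas
  var gl = canvas.getContext('webgl');
  gl.clearColor(0, 0, 1, 1);             // blue
  gl.clear(gl.COLOR_BUFFER_BIT);
  // ...further rendering goes here
}
// In the worker: self.onmessage = handleCanvasMessage;
```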

View in-memory canvas data?

I have an array of canvas tags being dynamically filled with image data, and I need to analyze the image data of the canvases for debugging. How may I view these in memory?
I can obviously paint them to an on-screen canvas, however that would be very time consuming and I'm looking for a solution similar to viewing variables in Chrome's watch list.
EDIT: I need to view the canvas data visually, rather than textually. I was asking for a way to do this without manually drawing the data of the in-memory canvases onto an on-screen canvas, to save time. For example, in Firefox's developer tools, if you hover over the source of an image, or the CSS URL of an image, it shows a small tooltip with the image. I wanted something similar for the in-memory canvases.
Create a large on-screen canvas and use the scaling version of context.drawImage to draw multiple down-sized in-memory canvases onto the on-screen canvas. You can view 4, 6, or 8 smaller canvases at once on the big canvas, kind of like a quad-screen security-camera display.
[ Addition based on comment (and request for clarification) ]
I don't know a way to do what you ask.
A canvas is not a variable. It's basically a bitmap.
If you want to view the pixel data, you can use context.getImageData. What you would see is thousands of array elements, all with values between 0 and 255.
Here's an example of the pixel data of an image:
3,45,117,255,2,46,119,255,4,48,123,255,2,49,127,255,3,51,133,255,2,52,137,255,2,54,140,255,1,55,143,255,7,74,152,255,8,71,148,255,7,69,146,255,5,69,141,255,5,74,143,255,2,82,145,255,1,92,149,255,0,97,152,255,31,92,175,255,15,86,174,255,0,79,99,255,0,48,121,255,10,30,177,255,21,35,144,255,1,33,134,255,0,46,155,255,0,44,174,255,0,57,120,255,0,51,121,255,0,37,167,255,9,50,156,255,2,52,125,255,15,67,168,255,0,53,162,255,15,71,168,255,4,63,153,255,1,62,145,255,14,69,152,255,9,64,155,255,9,83,182,255,14,121,225,255,0,124,232,255,11,120,223,255,11,124,228,255,7,124,229,255,2,123,228,255,8,131,235,255,16,138,239,255,10,130,227,255,0,115,209,255,2,123,238,255,0,123,233,255, ... and on for thousands of elements more.
If this is what you need, here's how to get it:
// assumes a canvas element on the page
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');

var img = new Image();
img.onload = start;
img.src = "yourImageSource";

function start() {
    canvas.width = img.width;
    canvas.height = img.height;
    ctx.drawImage(img, 0, 0);
    var a = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var t = "";
    var comma = "";
    for (var i = 0; i < a.length; i++) {
        t += comma + a[i];
        comma = ",";
    }
    console.log(t);
}
If that's not what you want, perhaps you could edit your question and better explain your need?
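One trick that may also help here, sketched below (the inline-preview behavior is best-effort and varies between browsers' devtools): log the canvas as a data URL, which you can open as an image, and use a styled `%c` console line that some devtools (e.g. Chrome's) render as an inline image preview.

```javascript
// Debug sketch: dump an in-memory canvas as a data URL so it can be
// inspected visually from the console without drawing it on screen.
// The %c background trick previews the image inline in some devtools.
function logCanvas(canvas, label) {
  var url = canvas.toDataURL();
  console.log((label || 'canvas') + ':', url); // openable image URL
  console.log('%c ',
    'font-size: 100px; background: url(' + url +
    ') no-repeat; background-size: contain;');
  return url;
}
```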

draw SVG image into CanvasRenderingContext2D

Is it possible to do something like this
var image = new Image();
image.src = 'img.svg';
context.drawImage(image, x, y); // context is an instance of CanvasRenderingContext2D
with an SVG image? Actually this code works, but I think the image is rasterized to a fixed-size bitmap, because if I zoom the browser page the image becomes coarse.
Clarification : The image should be re-drawn many times in the canvas context (i.e. for movements), so suggestions like "use this library" should consider this fact.
EDIT
From previous discussions, the issue seems to be due to a property of canvas (a canvas is not browser-zoomable) and not due to incorrect loading. Can I get and (eventually) modify this property of the canvas to achieve what I want? I have to draw on a canvas; there are no other options, unfortunately.
Re-render the SVG according to the desired zoom level.
"How to detect page zoom level in all modern browsers?" explains browser zoom-level detection. And you can use Fabric.js for canvas manipulation (http://fabricjs.com/).
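One concrete way to apply this, sketched here under the assumption that you can redraw on zoom: size the canvas backing store to the device pixel ratio (which rises when the page is zoomed in, in most browsers) and redraw the SVG image at that resolution, so the vector source is rasterized sharply each time.

```javascript
// Sketch: redraw an SVG image at the current display density so browser
// zoom doesn't make it coarse. `img` is an already-loaded Image whose src
// is the SVG file; cssWidth/cssHeight are the on-page dimensions.
function drawSvgSharp(canvas, img, cssWidth, cssHeight) {
  var dpr = window.devicePixelRatio || 1;     // reflects page zoom in most browsers
  canvas.width = cssWidth * dpr;              // backing-store pixels
  canvas.height = cssHeight * dpr;
  canvas.style.width = cssWidth + 'px';       // CSS size stays the same
  canvas.style.height = cssHeight + 'px';
  var ctx = canvas.getContext('2d');
  ctx.setTransform(dpr, 0, 0, dpr, 0, 0);     // keep drawing in CSS units
  ctx.drawImage(img, 0, 0, cssWidth, cssHeight);
}
```

Since the question says the image is redrawn many times anyway (for movements), calling this on each frame, or at least whenever the zoom level changes, fits that constraint.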

Generate html from a pre-rendered canvas element or component such as textbox?

I am learning JavaScript and HTML5 and out of curiosity wondered:
1) Is it possible to generate HTML from a canvas element(s)? For example, we have a canvas shape; on the click of a button, can it generate the HTML5 code that rendered it?
2) We have an HTML DOM button on the page; can we convert this into its HTML5 code?
Thanks
1.
No, a canvas is just a bitmap. The browser is blissfully unaware of what shape was just drawn. To be able to convert shapes to HTML, consider the canvas just a viewport or rendition of the shapes you store internally as objects. Treat your internal objects as the root: from that same root you can produce HTML, a bitmap (canvas), SVG, JSON, etc. The canvas becomes just one channel for representing that data, not the source of it.
You can extract the content of the canvas as an image by calling:
dataUri = canvas.toDataURL();
or as a pixel array:
buffer = context.getImageData(x, y, w, h).data;
Note that in both of these cases CORS restrictions apply.
2.
There is a simple way of getting a current snapshot of the element's HTML:
// assuming you have obtained the element
var html = button.outerHTML;
But it can easily get more complicated if you have applied a lot of external CSS to the element and so forth.
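A minimal sketch of the "internal objects as the root" idea from point 1, with illustrative names (none of this is from the original answer): keep shapes as plain data, then render that data to a canvas context or serialize it to SVG markup from the same source.

```javascript
// Shapes stored as data, independent of any rendering target.
var shapes = [{ type: 'rect', x: 10, y: 10, w: 50, h: 30, fill: 'red' }];

// Render the same data to a 2D canvas context...
function drawShapes(ctx, shapes) {
  shapes.forEach(function (s) {
    ctx.fillStyle = s.fill;
    ctx.fillRect(s.x, s.y, s.w, s.h);
  });
}

// ...or serialize it to SVG markup.
function shapesToSVG(shapes, width, height) {
  var body = shapes.map(function (s) {
    return '<rect x="' + s.x + '" y="' + s.y + '" width="' + s.w +
           '" height="' + s.h + '" fill="' + s.fill + '"/>';
  }).join('');
  return '<svg xmlns="http://www.w3.org/2000/svg" width="' + width +
         '" height="' + height + '">' + body + '</svg>';
}
```

Because the shape list is the single source of truth, the canvas and the generated markup can never disagree, which is exactly what the answer means by the canvas being "one channel" rather than the source.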
