Well, I fill a ScreenBuffer:ImageData at 480x360 and then want to draw it onto a 960x720 canvas. The goal is to reduce the fill rate; today's pixels are very small, so we can make them bigger and accept some quality loss. I'm looking for an operation with 2D acceleration. But we can't write directly to js.html.Image, and ImageData has no link to js.html.Image. I found an example in pure JS:
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas
However, it doesn't work in Haxe because there is no 'zoom' element. There is also some information about HTML restrictions on copying from one image to another.
Many thanks for answers!
The compiler writes "js.html.Element has no field getContext"
getElementById()'s return type is the generic js.html.Element class. Since in your case, you know you're dealing with a <canvas>, you can safely cast it to the more specific CanvasElement. This then lets you call its getContext() method:
var canvas:CanvasElement = cast js.Browser.document.getElementById('zoom');
var zoomctx = canvas.getContext('2d');
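For the original scaling goal, here is a minimal sketch of the usual approach in plain JS (the Haxe js.html externs expose the same calls): put the ImageData onto a small offscreen canvas once per frame, then let drawImage do the accelerated 2x upscale. screenBuffer and zoomctx are assumed names from the question; the offscreen canvas is my own addition.

var small = document.createElement('canvas');   // offscreen 480x360 staging canvas
small.width = 480;
small.height = 360;
var smallCtx = small.getContext('2d');

smallCtx.putImageData(screenBuffer, 0, 0);       // screenBuffer is the 480x360 ImageData
zoomctx.imageSmoothingEnabled = false;           // keep hard pixel edges when enlarging
zoomctx.drawImage(small, 0, 0, 960, 720);        // accelerated 2x upscale onto the big canvas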
Related
I'm using Mapbox GL, and trying to get a snapshot of it, and merge the snapshot with another image overlaid for output.
I have an HTMLCanvasElement offscreen, and I'm first writing the canvas returned from Map.getCanvas() to it, then writing the second (alpha-transparent) canvas over that.
The problem is that, though I clearly see elements onscreen in the Map instance, the result only shows the second image/canvas written, and the rest is blank.
So I exported just the map's canvas, and I saw that the map canvas itself is blank, even though a console.log() shows its image data to be a large chunk of information.
Here's my export function:
onExport(annotationCanvas: HTMLCanvasElement) {
  const mergeCanvas: HTMLCanvasElement = document.createElement('canvas');
  const mapCanvas: HTMLCanvasElement = this.host.map.getCanvas();
  const mergeCtx: CanvasRenderingContext2D = mergeCanvas.getContext('2d');

  mergeCanvas.height = annotationCanvas.height;
  mergeCanvas.width = annotationCanvas.width;

  mergeCtx.drawImage(mapCanvas, 0, 0);
  mergeCtx.drawImage(annotationCanvas, 0, 0);

  const mergedDataURL = mergeCanvas.toDataURL();
  const mapDataURL = mapCanvas.toDataURL();
  const annotationDataURL = annotationCanvas.toDataURL();

  console.log(mapDataURL); // Lots of data
  download(mapDataURL, 'map-data-only.png'); // Blank image # 1920x1080
  download(mergedDataURL, 'annotation.png'); // Only shows annotation (the second layer/canvas) data
}
Is this a bug, or am I missing something?
UPDATE: I sort of figured out what this is about, and have possible options.
While reading a Mapbox feature request, I learned that if you instantiate your Map with the preserveDrawingBuffer option set to false (the default), you won't be able to get a canvas with usable image data. Setting this option to true degrades performance, and you can't change the setting after a Map is instantiated...
I want the Map to perform the best it possibly can!!!!
Then, from an answer I stumbled on regarding a similar three.js question, I learned that if I take the screenshot immediately after rendering, I should get the canvas data I need.
I tried just calling this.host.map['_rerender']() right before I capture the canvas, but it still returned blankness.
Then, searching the source code, I found a function called _requestRenderFrame that looks like it might be what I need, because it lets me ask the Map to run a function immediately after the next render cycle. But as it turns out, that function is omitted from the compiled code even though it is present in the source, apparently because it only exists on master and is not part of the release yet.
So I don't have a satisfactory solution yet; please let me know of any insights.
As you mentioned in your updated question the solution is to set preserveDrawingBuffer: true upon Map initialisation.
To answer your updated question I think #jfirebaugh's answer at https://github.com/mapbox/mapbox-gl-js/issues/6448#issuecomment-378307311 sums it up very well:
preserveDrawingBuffer can't be modified on the fly. It has to be set at the time the WebGL context is created, and that can have a negative effect on performance.
It's rumored that you can grab the canvas data URL immediately after rendering, without needing preserveDrawingBuffer, but I haven't verified that, and I suspect it's not guaranteed by the spec.
So although it might be possible to grab the canvas data URL immediately after rendering, it's not guaranteed by the spec.
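For reference, a minimal sketch of passing the option at construction time (the container id and style URL are just placeholders):

const map = new mapboxgl.Map({
  container: 'map',                              // placeholder element id
  style: 'mapbox://styles/mapbox/streets-v9',    // placeholder style
  preserveDrawingBuffer: true                    // must be set here; cannot be toggled later
});

// Later, map.getCanvas().toDataURL() will now return real pixel data.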
Yes, I know about .getImageData()
I mean, let's say, I have to change some pixels:
var imageData = ctx.getImageData(...);
It seems this method gives me a completely new copy of the "real" image data (hidden somewhere deep from me).
I say that because if you create a new one:
var imgData2 = ctx.getImageData(.../*same parameters as before*/);
and compare two buffers:
imageData.data.buffer === imgData2.data.buffer; //false
So each time it creates a new copy from its bitmap. Oh my gosh, why? Okay, go further:
/*...apply some new changes to the imageData in a loop...*/
Nothing special above. But now, it's time to put this back:
ctx.putImageData(imageData, ...);
And this call internally runs another loop and copies my imageData back.
So much extra work! Is there a way to get the actual imageData and manipulate it without get/put? And if not, I ask again: WHY? Is it for security reasons? What are they afraid I could do with those pixels?
Thank you!
Short answer:
No.
Not-so-long answer:
You may be able to achieve it with some hacks, but that would be such a pain.
Explanations:
According to the specs, getImageData returns an ImageData object whose data is a TypedArray copy of the canvas' image data, not a pointer to the live image data as a modifiable object.
To paint on a canvas, you have to use methods such as fill, stroke, drawImage or putImageData; it would make even less sense if the actual canvas image were modified each time you iterated through the array.
Each time you call getImageData, a new TypedArray (note that the choice of the actual type is up to the UA) is created and filled with the current data of the canvas image. This way, you can call the method and make different alterations to the ArrayBuffer without modifying the actual image in the canvas (so you can store it, or call the method again).
As to why the buffer of the returned ImageData is not the same on each call, I think it is because "Pixels must be returned as non-premultiplied alpha values", while for performance the browser stores them premultiplied. You can see the de-premultiplication operation in the Firefox source code, which actually fills a new Uint8ClampedArray.
Also, it avoids having to check whether the canvas image has been modified since the last call, and ensures you always get its current ImageData.
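In practice this means you copy out once, work on the array as long as you like, and copy back once. A minimal sketch:

// One copy out of the canvas...
var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
var data = imageData.data;                 // Uint8ClampedArray snapshot of the pixels

// ...any number of cheap in-memory edits...
for (var i = 0; i < data.length; i += 4) {
  data[i]     = 255 - data[i];             // invert red
  data[i + 1] = 255 - data[i + 1];         // invert green
  data[i + 2] = 255 - data[i + 2];         // invert blue
}

// ...and one copy back into the canvas.
ctx.putImageData(imageData, 0, 0);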
I'm still relatively new to working with the canvas tag. What I've done so far is draw an image to the canvas. My goal is to have a fake night/day animation that cycles repeatedly.
I've exhausted quite a few different avenues (SVG, CSS3 filters, etc) and think that canvas pixel manipulation is the best route in my case. I'm trying to:
Loop through all pixels in the image
Select a certain color range
Adjust to new color
Update the canvas
Here's the code I have so far:
function gameLoop(){
  requestAnimationFrame(gameLoop);

  ////////////////////////////////////////////////////////////////
  // LOOP PIXEL DATA - PIXEL'S RGBA IS STORED IN SEQUENTIAL ARRAYS
  ////////////////////////////////////////////////////////////////
  for(var i=0; i<data.length; i+=4){
    var red=data[i+0];
    var green=data[i+1];
    var blue=data[i+2];
    var alpha=data[i+3];

    // GET HUE BY CONVERTING TO HSL
    var hsl=rgbToHsl(red, green, blue);
    var hue=hsl.h*360;

    // CHANGE SET COLORRANGE TO NEW COLORSHIFT
    if(hue>colorRangeStart && hue<colorRangeEnd){
      var newRgb=hslToRgb(hsl.h+colorShift, hsl.s, hsl.l);
      data[i+0]=newRgb.r;
      data[i+1]=newRgb.g;
      data[i+2]=newRgb.b;
      data[i+3]=255;
    }
  }

  // UPDATE CANVAS
  ctx.putImageData(imgData, 0, 0);
}
The code works and selects a hue range and shifts it once, but it is incredibly laggy. The canvas dimensions are roughly 500x1024.
My questions:
Is it possible to improve performance?
Is there a better way to perform a defined hue shift animation?
Thanks!
It's hard to do this in real time using high-quality HSL conversion. Been there, done that, so I came up with a quantized approach which allows you to do this in real time.
You can find the solution here (GPL3.0 licensed):
https://github.com/epistemex/FastHSL2RGB
Example of usage can be found here (MIT license) incl. demo:
https://github.com/epistemex/HueWheel
Apologies for referencing my own solutions here, but the inner workings (the how-tos) are too extensive to present in a simple form here, and both of these are free to use for anything.
The key points are in any case:
Quantize the range you want to use (don't use the full 360 degrees, and don't use floating-point values for lightness etc.)
Cache the values in a 3D array (do the initial setup with web workers, or use rough values)
Quantize the input values so they fit in the range of the inner 3D array
Process the bitmap using these values
It is not accurate, but good enough for animations (or previews, which is what I wrote it for).
There are other techniques, such as pre-caching the complete processed bitmap for key positions and then interpolating the colors between those instead. This, of course, requires much more memory, but it is fast.
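To make the idea concrete, here is a rough sketch of a quantized lookup table. This is illustrative only, not the actual FastHSL2RGB code; rgbToHsl, hslToRgb, colorShift, the colour range and data are the names from the question.

var LEVELS = 32, STEP = 256 / LEVELS;                 // 32 levels per channel
var lut = new Uint8Array(LEVELS * LEVELS * LEVELS * 3);

// One-time (or web-worker) setup: bake the hue shift into the table.
for (var r = 0; r < LEVELS; r++)
  for (var g = 0; g < LEVELS; g++)
    for (var b = 0; b < LEVELS; b++) {
      var hsl = rgbToHsl(r * STEP, g * STEP, b * STEP);
      var hue = hsl.h * 360;
      var rgb = (hue > colorRangeStart && hue < colorRangeEnd)
        ? hslToRgb(hsl.h + colorShift, hsl.s, hsl.l)
        : { r: r * STEP, g: g * STEP, b: b * STEP };   // outside the range: keep as-is
      var o = ((r * LEVELS + g) * LEVELS + b) * 3;
      lut[o] = rgb.r; lut[o + 1] = rgb.g; lut[o + 2] = rgb.b;
    }

// Per frame: only table lookups, no HSL math in the pixel loop.
for (var i = 0; i < data.length; i += 4) {
  var o = (((data[i] / STEP | 0) * LEVELS + (data[i + 1] / STEP | 0)) * LEVELS
          + (data[i + 2] / STEP | 0)) * 3;
  data[i] = lut[o]; data[i + 1] = lut[o + 1]; data[i + 2] = lut[o + 2];
}

If colorShift changes every frame, you would precompute one table per quantized shift value (or rebuild the table each frame, which at 32768 entries is still far cheaper than per-pixel HSL conversion on a 500x1024 bitmap).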
Hope this helps!
I have the following code for writing draw calls to a "back buffer" canvas, then placing those in a main canvas using drawImage. This is for optimization purposes and to ensure all images get placed in sequence.
Before placing the buffer canvas on top of the main one, I'm using fillRect to create a dark-blue background on the main canvas.
However, the blue background is rendering after the sprites. This is unexpected, as I am making its fillRect call first.
Here is my code:
render: function() {
  this.buffer.clearRect(0, 0, this.w, this.h);
  this.context.fillStyle = "#000044";
  this.context.fillRect(0, 0, this.w, this.h);

  for (var i in this.renderQueue) {
    for (var ii = 0; ii < this.renderQueue[i].length; ii++) {
      var sprite = this.renderQueue[i][ii];

      // Draw it!
      this.buffer.fillStyle = "green";
      this.buffer.fillRect(sprite.x, sprite.y, sprite.w, sprite.h);
    }
  }

  this.context.drawImage(this.bufferCanvas, 0, 0);
}
This also happens when I use fillRect on the buffer canvas, instead of the main one.
Changing the globalCompositeOperation between 'source-over' and 'destination-over' (for both contexts) does nothing to change this.
Paradoxically, if I instead place the blue fillRect inside the nested for loops with the other draw calls, it works as expected...
Thanks in advance!
Addendum: Changing the composite operation does behave as expected, but not for remedying this specific issue. Sorry for the ambiguity.
There's a lot that's suspect here.
First off, double buffering a canvas does nothing but hurt performance by adding complication; all browsers double-buffer automatically, so if that's your goal here, you shouldn't be drawing to a buffer at all.
Here's an example of why you don't need double buffering: http://jsfiddle.net/simonsarris/XzAjv/
So, getting to the meat of the matter: lines of JavaScript inside a discrete function don't simply run out of order. Something else is wrong here.
Setting a breakpoint on the drawImage call would solve this pretty much instantly, so if you aren't familiar with Firebug or the Chrome developer tools, I'd highly recommend giving them a look.
I'm guessing that the "blue" you're seeing is actually the only thing drawn to your "buffer" canvas and perhaps this.buffer is not actually the buffer context.
Another possibility is that this.w and this.h are accidentally very small, so that your initial clearRect and fillRect at the start of the method are doing nothing.
In any case speculation is nowhere near as good as opening up developer tools and actually looking at what's happening.
Generally speaking, if you need things to be in order, use an array, not an object. Iterating over an object's properties is not guaranteed to happen in any particular order.
Use an array and a plain for (var i = 0; i < length; i++) loop so the iteration order is guaranteed.
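A minimal sketch of what that looks like (the layer names are illustrative, not from your code):

// Keep the queue as an array of layers so iteration order is well defined.
this.renderQueue = [backgroundSprites, spriteLayer, effectsLayer]; // illustrative layers

for (var i = 0; i < this.renderQueue.length; i++) {
  var layer = this.renderQueue[i];
  for (var j = 0; j < layer.length; j++) {
    var sprite = layer[j];
    this.buffer.fillRect(sprite.x, sprite.y, sprite.w, sprite.h);
  }
}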
I'm currently implementing a 2D deformable terrain effect in a game I'm working on, and it's going alright, but I can see it becoming a performance hog very fast as I start to add more layers to the effect.
What I'm looking for is a way to save a path, clipping mask or similar, instead of having to store each point of the terrain path and redraw it every frame. As I add more layers I will have to iterate over the path, which could contain thousands of points, more and more.
Some very simple code to demonstrate what I'm currently doing
for (var i = 0; i < aMousePoints.length; i++)
{
  cRenderContext.save();
  cRenderContext.beginPath();

  var cMousePoint = aMousePoints[i];
  cRenderContext.arc(cMousePoint.x, cMousePoint.y, 30, 0, 2 * Math.PI, false);
  cRenderContext.clip();
  cRenderContext.drawImage(cImg, 0, 0);
  cRenderContext.closePath();

  cRenderContext.restore();
}
Basically, I'm after an efficient way to draw my clipping mask for my image over and over each frame.
Notice how your clipping region stays exactly the same except for its x/y location. This is a big plus.
The clipping region is one of the things that is saved and restored with context.save() and context.restore() so it is possible to save it that way (in other words defining it only once). When you want to place it, you will use ctx.translate() instead of arc's x,y.
But it is probably more efficient to do it a second way:
Have an in-memory canvas (never added to the DOM or shown on the page) that is solely for containing the clipping region and is the size of the clipping region
Apply the clipping region to this in-memory canvas, and then draw the image onto this canvas.
Then use drawImage with the in-memory canvas onto your game context. In other words: cRenderContext.drawImage(in-memory-canvas, x, y); where x and y are the appropriate location.
So this way the clipping region always stays in the same place and is only ever drawn once. The image is moved on the clipping-canvas and then drawn to look correct, and then the in-memory canvas is drawn to your main canvas. It should be much faster that way, as calls to drawImage are far faster than creating and drawing paths.
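A minimal sketch of that second approach (the 60x60 size just matches the 30px-radius arc from your code; the other names are yours):

// One-time setup: a small offscreen canvas whose clip never changes.
var clipCanvas = document.createElement('canvas');
clipCanvas.width = clipCanvas.height = 60;             // fits the 30px-radius circle
var clipCtx = clipCanvas.getContext('2d');
clipCtx.beginPath();
clipCtx.arc(30, 30, 30, 0, 2 * Math.PI, false);
clipCtx.clip();                                        // the clip is defined exactly once

// Per point, per frame:
function drawHole(cMousePoint) {
  clipCtx.clearRect(0, 0, 60, 60);
  // Shift the image so the correct region shows through the fixed clip.
  clipCtx.drawImage(cImg, 30 - cMousePoint.x, 30 - cMousePoint.y);
  cRenderContext.drawImage(clipCanvas, cMousePoint.x - 30, cMousePoint.y - 30);
}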
As a separate performance consideration, don't call save and restore unless you have to. They do take time and they are unnecessary in your loop above.
If your code is open-source, let me know and I'll take a look at it performance-wise in general if you want.
Why not have one canvas for the foreground and one canvas for the background? Like the following demo
Foreground/Background Demo (I may have gone a little overboard making the demo; I love messing with JS/canvas.)
But basically the foreground canvas is transparent besides the content, so it acts like a mask over the background canvas.
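A rough sketch of that layered setup (the element ids, image names and destination-out erase are illustrative assumptions, not taken from the demo):

// Two stacked canvases, absolutely positioned at the same spot in the page.
var bgCtx = document.getElementById('background').getContext('2d'); // illustrative ids
var fgCtx = document.getElementById('terrain').getContext('2d');

bgCtx.drawImage(skyImage, 0, 0);      // backdrop drawn once
fgCtx.drawImage(terrainImage, 0, 0);  // terrain on the transparent top canvas

// "Deforming" the terrain is then just erasing pixels; the backdrop
// shows through the transparent holes automatically.
function dig(x, y) {
  fgCtx.save();
  fgCtx.globalCompositeOperation = 'destination-out';
  fgCtx.beginPath();
  fgCtx.arc(x, y, 30, 0, 2 * Math.PI);
  fgCtx.fill();
  fgCtx.restore();
}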
It looks like it is now possible with the new Path2D object.
The new Path2D API (available from Firefox 31+) lets you store paths, which simplifies your canvas drawing code and makes it run faster. The constructor provides three ways to create a Path2D object:
new Path2D(); // empty path object
new Path2D(path); // copy from another path
new Path2D(d); // path from SVG path data
The third version, which takes SVG path data to construct, is especially handy. You can now re-use your SVG paths to draw the same shapes directly on a canvas as well:
var p = new Path2D("M10 10 h 80 v 80 h -80 Z");
This information is taken from the official Mozilla documentation.
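For the clipping use case from the question, a rough sketch (aMousePoints, cImg and cRenderContext are the names from the original code): build the mask once, then reuse it every frame.

// Build the mask once, e.g. whenever a new point is added.
var mask = new Path2D();
for (var i = 0; i < aMousePoints.length; i++) {
  var p = aMousePoints[i];
  mask.moveTo(p.x + 30, p.y);               // jump to the arc's start so circles aren't joined by lines
  mask.arc(p.x, p.y, 30, 0, 2 * Math.PI);
}

// Each frame: one clip, one draw.
cRenderContext.save();
cRenderContext.clip(mask);                  // clip() accepts a Path2D
cRenderContext.drawImage(cImg, 0, 0);
cRenderContext.restore();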