Avoid clearing the canvas on updateOptions calls - javascript

I am working on a signal plotting program, trying to simulate the 'persistence' feature available on many oscilloscopes.
I would like to prevent the dygraphs canvas from being cleared on every updateOptions call. Instead, the plot should be preserved until an explicit call clears it. This feature would let me check whether a signal preserves its phase over a certain amount of time.
I tried setting the block_redraw parameter to false in the updateOptions call, without success.
Any ideas?

This isn't really something dygraphs is designed to do. You're asking it to render the full history of its data source, rather than the current state of its data source.
That being said, here's the code that clears the plotting canvas:
DygraphCanvasRenderer.prototype.clear = function() {
  this.elementContext.clearRect(0, 0, this.width, this.height);
};
So if you override that, it might do what you want:
DygraphCanvasRenderer.prototype.clear = function() {};
Be warned, though: this is liable to break lots of things (like zooming and panning) in addition to giving you the behavior you want. You can see this if you visit the live random data demo page and paste that snippet into the JS console.
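If you do go down this path, you will probably want an explicit way to wipe the accumulated traces. A minimal, untested sketch, assuming a Dygraph instance g (realClear and clearPersistence are illustrative names):
var realClear = DygraphCanvasRenderer.prototype.clear;
DygraphCanvasRenderer.prototype.clear = function() {}; // keep old traces on screen

function clearPersistence(g) {
  // Temporarily restore the real clear, force one redraw (which now
  // clears the canvas first), then disable clearing again.
  DygraphCanvasRenderer.prototype.clear = realClear;
  g.updateOptions({});
  DygraphCanvasRenderer.prototype.clear = function() {};
}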
Good luck!

Related

How to get usable canvas from Mapbox GL JS

I'm using Mapbox GL and trying to take a snapshot of it, then merge the snapshot with another image overlaid on top for output.
I have an off-screen HTMLCanvasElement: I first draw the canvas returned from Map.getCanvas() onto it, then draw the second (alpha-transparent) canvas over that.
The problem is that, although I clearly see elements on screen in the Map instance, the result only shows the second image/canvas, and the rest is blank.
So I exported just the map's canvas and saw that it is blank, even though a console.log() of its image data shows a large chunk of information.
Here's my export function:
onExport(annotationCanvas: HTMLCanvasElement) {
  const mergeCanvas: HTMLCanvasElement = document.createElement('canvas');
  const mapCanvas: HTMLCanvasElement = this.host.map.getCanvas();
  const mergeCtx: CanvasRenderingContext2D = mergeCanvas.getContext('2d');

  mergeCanvas.height = annotationCanvas.height;
  mergeCanvas.width = annotationCanvas.width;

  // Draw the map first, then the annotation layer on top.
  mergeCtx.drawImage(mapCanvas, 0, 0);
  mergeCtx.drawImage(annotationCanvas, 0, 0);

  const mergedDataURL = mergeCanvas.toDataURL();
  const mapDataURL = mapCanvas.toDataURL();
  const annotationDataURL = annotationCanvas.toDataURL();

  console.log(mapDataURL); // Lots of data
  download(mapDataURL, 'map-data-only.png'); // Blank image @ 1920x1080
  download(mergedDataURL, 'annotation.png'); // Only shows annotation (the second layer/canvas) data
}
Is this a bug, or am I missing something?
UPDATE: I sort of figured out what this is about, and have possible options.
Upon stumbling on a Mapbox feature request, I learned that if you instantiate your Map with the preserveDrawingBuffer option set to false (the default), you won't be able to get a canvas with usable image data. Setting the option to true degrades performance, and you can't change it after the Map is instantiated...
I want the Map to perform the best it possibly can!
So, from an answer I stumbled on regarding a question about three.js, I learned that if I take the screenshot immediately after rendering, I will get the canvas data that I need.
I tried calling this.host.map['_rerender']() right before capturing the canvas, but it still returned blankness.
Then, searching the source code, I found a function called _requestRenderFrame that looks like it might be what I need, because it would let me ask the Map to run a function immediately after the next render cycle. But as it turns out, that function is omitted from the compiled code even though it's present in the source, apparently because it is only on master and not part of the release.
So I don't have a satisfactory solution yet; please let me know of any insights.
As you mentioned in your updated question, the solution is to set preserveDrawingBuffer: true upon Map initialisation.
To answer your updated question, I think @jfirebaugh's answer at https://github.com/mapbox/mapbox-gl-js/issues/6448#issuecomment-378307311 sums it up very well:
preserveDrawingBuffer can't be modified on the fly. It has to be set at the time the WebGL context is created, and that can have a negative effect on performance.
It's rumored that you can grab the canvas data URL immediately after rendering, without needing preserveDrawingBuffer, but I haven't verified that, and I suspect it's not guaranteed by the spec.
So although it might be possible to grab the canvas data URL immediately after rendering, it's not guaranteed by the spec.
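For completeness, here is a hedged sketch of both routes, assuming a mapbox-gl Map instance (the style URL and the download() helper from the question are illustrative, and the render-event capture is the unverified approach discussed above):
// Option 1: pay the performance cost and keep the buffer readable.
const map = new mapboxgl.Map({
  container: 'map',
  style: 'mapbox://styles/mapbox/streets-v9',
  preserveDrawingBuffer: true // must be set at construction time
});

// Option 2 (unverified): read the canvas in the same frame it is drawn,
// before the drawing buffer is cleared.
map.once('render', () => {
  download(map.getCanvas().toDataURL(), 'map.png');
});
map.triggerRepaint();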

.addHitRegion() doesn't work in Chrome

I have the latest Chrome version, and the specs say it should support the .addHitRegion() method, as mentioned on MDN. For some reason I get an Uncaught TypeError: context.addHitRegion is not a function error.
My code is as simple as this:
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
context.beginPath();
context.rect(10,10,100,100);
context.fill();
context.addHitRegion({'id': 'The First Button', 'cursor': 'pointer'});
How do I fix it?
Go to chrome://flags in your browser, then set the flag Experimental Web Platform features to Enabled.
As the other answers state, you can enable this through flags. However, you can't ask your users to do the same, and support is limited to a few browsers. I would therefore recommend looking into other solutions; I list some here:
A notch better approach is to use Path2D objects. They provide the same flexibility for defining hit shapes. Use them with isPointInPath(), which also accepts a path object. Store each path in an array, loop through it, and test the position against each entry. Unfortunately this, too, is limited to a few browsers, but you can at least use a poly-fill such as this to fix that to some extent (see notes in the link for limitations).
A better option in regard to support and availability, though one requiring a bit more work, is to rebuild each path you want to test on the context itself, then use isPointInPath() as above to see if the mouse position is inside that path.
If the shapes are simple, such as rectangles or circles, you can do simple mathematical tests, which is a performant alternative (see the sketch below).
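As a rough illustration of the last two options, here is a minimal sketch assuming canvas and ctx as in the question (the ids and helper names are illustrative):
// Path2D approach: store each hit shape with an id and loop on click.
const paths = [];
const r = new Path2D();
r.rect(10, 10, 100, 100);
paths.push({ id: 'The First Button', path: r });

canvas.addEventListener('click', (e) => {
  for (const p of paths) {
    if (ctx.isPointInPath(p.path, e.offsetX, e.offsetY)) {
      console.log('hit:', p.id);
    }
  }
});

// Math-only test for an axis-aligned rectangle:
function inRect(x, y, rx, ry, rw, rh) {
  return x >= rx && x < rx + rw && y >= ry && y < ry + rh;
}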
So you need to set the experimental flag here.
From the compatibility table at the bottom of the page you linked:
This feature is behind a feature flag. Set the flag ExperimentalCanvasFeatures to true to enable it.
To turn on experimental canvas features, browse to chrome://flags, turn on "Enable experimental canvas features" and relaunch.
Unfortunately the hit region feature is now obsolete and no longer appears to be enableable. You can use isPointInPath() as an alternative. You'll need to create a path object to pass into that function. Something like:
const rectangle = new Path2D();
rectangle.rect(10, 10, 100, 100);
ctx.fill(rectangle);
...then to check, you could put it into an event listener:
canvas.addEventListener("mousemove", (e) => {
  if (ctx.isPointInPath(rectangle, e.offsetX, e.offsetY)) {
    console.log("rectangle is hit");
  }
});

Saving canvas to image via canvas.toDataURL results in black rectangle

I'm using Pixi.js and trying to save a frame of the animation to an image. canvas.toDataURL should work, but all I get is a black rectangle. See the live example here.
The code I use to extract the image data and set the image is:
var canvas = $('canvas')[0];
var context = canvas.getContext('2d');

$('button').click(function() {
  var data = renderer.view.toDataURL("image/png", 1);
  // tried: var data = canvas.toDataURL();
  $('img').attr('src', data);
});
I know this has been answered at least 5 other times on SO but ...
What Kaiido mentioned will work, but the real issue is that a canvas, when used with WebGL, has 2 buffers by default: the buffer you are drawing to and the buffer being displayed.
When you draw into a WebGL canvas, as soon as the current event exits (for example your requestAnimationFrame callback), the canvas is marked for swapping those 2 buffers. When the browser re-draws the page it does the swap: the buffer you were drawing to is swapped with the one being displayed, and you're now drawing to the other buffer. That buffer is cleared.
The reason it's cleared rather than left alone is that whether the browser actually swaps buffers or does something else is up to the browser. For example, if antialiasing is on (which is the default), it doesn't actually do a swap; it does a "resolve", converting the high-res buffer you just drew into a normal-res, anti-aliased copy in the display buffer.
So, to keep things consistent regardless of which way the browser works, it just always clears whatever buffer you're about to draw to. Otherwise you'd have no idea whether it held 1-frame-old or 2-frame-old data.
Setting preserveDrawingBuffer: true tells the browser "always copy, never swap". In this case it doesn't have to clear the drawing buffer because what's in the drawing buffer is always known. No swapping.
What is the point of all that? The point is, if you want to call toDataURL or gl.readPixels you need to call it IN THE SAME EVENT.
So, for example, your code could work something like this:
var capture = false;

$('button').click(function() {
  capture = true;
});

function render() {
  renderer.render(...);
  if (capture) {
    capture = false;
    var data = renderer.view.toDataURL("image/png", 1);
    $('img').attr('src', data);
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
In that case, because you call toDataURL in the same JavaScript event in which you rendered, you'll always get the correct results, regardless of whether preserveDrawingBuffer is true or false.
If you're writing an app that is not constantly rendering, you could also do something like:
$('button').click(function() {
  // render right now
  renderer.render(...);
  // capture immediately
  var data = renderer.view.toDataURL("image/png", 1);
  $('img').attr('src', data);
});
The reason preserveDrawingBuffer is false by default is that swapping is faster than copying, so this allows the browser to go as fast as possible.
Also see this answer for one other detail
[NOTE]
While this answer is the accepted one, please do read the one by @gman just below; it contains a much better way of doing this.
Your problem is that you are using a WebGL context, so you need to set the preserveDrawingBuffer property of the WebGL context to true in order to be able to call the toDataURL() method.
Alternatively, you can force Pixi to use the 2D context by using the CanvasRenderer class.
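A hedged sketch of both options, assuming a Pixi v3-era API (check the renderer signature for your version):
// Ask for a context that keeps its buffer readable after the frame:
var renderer = PIXI.autoDetectRenderer(800, 600, {
  preserveDrawingBuffer: true // lets toDataURL() read the last frame
});

// ...or force the 2D canvas renderer, which has no buffer-swap issue:
// var renderer = new PIXI.CanvasRenderer(800, 600);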

Collision detection using a collision array inside html5 canvas

I am trying to detect if a character and an object inside an image collide. I am using a function that parses the image and creates a collision array, and another function that detects whether there is a collision at a specific location. My problem is that the isCollision function is never triggered. Here's my jsfiddle: http://jsfiddle.net/AbdiasSoftware/UNWWq/1/
if (isCollision(character.x, character.y)) {
  alert("Collision");
}
Please help me to fix my problem.
Add this at the top of your init() method and it should work:
FieldImg.crossOrigin = '';
As you are loading the image from a different origin, CORS kicks in, and you need to request cross-origin usage when using getImageData() (or toDataURL()).
See modified fiddle here.
Note: in your final code this is probably not going to be necessary, as you will probably serve the images from the same domain as the page itself. In that case you need to remove the cross-origin request unless your server is set up to handle it. Just something to keep in mind for later if what worked suddenly doesn't...
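Also note that order matters: crossOrigin must be set before src is assigned, otherwise the request has already gone out without CORS. A minimal sketch (the URL is illustrative):
var FieldImg = new Image();
FieldImg.crossOrigin = ''; // same effect as 'anonymous'
FieldImg.src = 'http://example.com/field.png';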
OK, I found it:
You are loading a huge background image and drawing part of it onto a much smaller canvas.
You do visual collision detection using a binary view of the display canvas that you build in the process() function.
The issue comes when you want to test: to compute the pixel position of the player within the binary view, you use y * width + x, but with the wrong width: that of the background image, when it should be that of the view (cw).
function isCollision(x, y) {
  return (buffer8[y * cw + x] === 1);
}
Move right and watch the console in this fiddle:
http://jsfiddle.net/gamealchemist/UNWWq/9/
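For reference, a minimal sketch of how such a binary view can be built in a process()-style function, assuming the collision shapes have been drawn onto the cw x ch display canvas (the alpha threshold is illustrative):
function buildCollisionBuffer(ctx, cw, ch) {
  var data = ctx.getImageData(0, 0, cw, ch).data;
  var buffer8 = new Uint8Array(cw * ch);
  for (var i = 0; i < buffer8.length; i++) {
    // Mark a pixel as solid when its alpha channel is opaque enough.
    buffer8[i] = data[i * 4 + 3] > 128 ? 1 : 0;
  }
  return buffer8;
}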

Most efficient way to throttle continuous JavaScript execution on a web page

I'd like to continuously execute a piece of JavaScript code on a page, spending all the CPU time I can on it, while keeping the browser functional and responsive at the same time.
If I just run my code continuously, it freezes the browser's UI and the browser starts to complain. Right now I pass a zero timeout to setTimeout, which does a small chunk of work and loops back to setTimeout. This works, but does not seem to utilize all the available CPU. Any better ways of doing this that you might think of?
Update: To be more specific, the code in question renders frames on a canvas continuously. The unit of work here is one frame. We aim for the maximum possible frame rate.
Probably what you want is to centralize everything that happens on the page and use requestAnimationFrame to do all your drawing. So basically you would have a function/class that looks something like this (you'll have to forgive some style/syntax errors; I'm used to Mootools classes, so just take this as an outline):
var Main = function(){
  this.queue = [];
  this.actions = {};
  requestAnimationFrame(this.loop.bind(this));
};

Main.prototype.loop = function(){
  // Drain every event queued since the last frame.
  while (this.queue.length){
    var action = this.queue.pop();
    this.executeAction(action);
  }
  // do your rendering here
  requestAnimationFrame(this.loop.bind(this));
};

Main.prototype.addToQueue = function(e){
  this.queue.push(e);
};

Main.prototype.addAction = function(target, event, callback){
  if (this.actions[target] === void 0) this.actions[target] = {};
  if (this.actions[target][event] === void 0) this.actions[target][event] = [];
  this.actions[target][event].push(callback);
};

Main.prototype.executeAction = function(e){
  if (this.actions[e.target] !== void 0 && this.actions[e.target][e.type] !== void 0){
    for (var i = 0; i < this.actions[e.target][e.type].length; i++){
      this.actions[e.target][e.type][i](e); // invoke each registered callback
    }
  }
};
So basically you'd use this class to handle everything that happens on the page. Every event handler would be onclick='Main.addToQueue(event)' (or however you want to wire up your events); you just point them at adding the event to the queue, and use Main.addAction to direct those events to whatever you want them to do. This way every user action gets executed as soon as your canvas has finished redrawing and before it gets redrawn again. As long as your canvas renders at a decent frame rate, your app should remain responsive.
EDIT: forgot the "this" binding on the loop; fixed with requestAnimationFrame(this.loop.bind(this))
Web Workers are something to try:
https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
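A minimal sketch of the split, assuming a separate worker.js file. Note that a worker cannot touch the DOM or the canvas, so it suits the computation part of each frame, not the drawing itself (doHeavyComputation and drawFrame are hypothetical):
// main.js
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
  // Back on the main thread: draw the computed frame data here.
  drawFrame(e.data); // hypothetical drawing routine
};
worker.postMessage({ frame: 0 });

// worker.js
onmessage = function(e) {
  var result = doHeavyComputation(e.data); // the CPU-bound work
  postMessage(result);
};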
You can tune your performance by changing the amount of work you do per invocation. In your question you say you do a "small chunk of work"; establish a parameter that controls the amount of work being done and try various values.
You might also try setting the timeout before you do the processing. That way the time spent processing counts toward whatever minimum delay the browser enforces.
One technique I use is to keep a counter of iterations in my processing loop, then set up an interval of, say, one second that displays the counter and resets it to zero. This provides a rough performance figure with which to measure the effect of changes you make (see the sketch below).
In general this is likely to be very dependent on specific browsers, even browser versions. With tunable parameters and performance measurements you could implement a feedback loop to optimize in real time.
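A sketch of the tunable-chunk pattern with the one-second counter described above (CHUNK and doUnitOfWork are illustrative names):
var CHUNK = 500; // tune this per browser: units of work per invocation
var count = 0;

function tick() {
  for (var i = 0; i < CHUNK; i++) {
    doUnitOfWork(); // hypothetical: one small piece of the job
    count++;
  }
  setTimeout(tick, 0); // yield so the UI stays responsive
}

// Report and reset the counter once per second for a rough throughput figure.
setInterval(function() {
  console.log(count + ' units/sec');
  count = 0;
}, 1000);

tick();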
You can use window.postMessage() to overcome the minimum delay that setTimeout enforces. See this article for details; a demo is available here.
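The core of that technique is a setZeroTimeout() built on postMessage; a minimal sketch along the lines of the linked article:
var timeouts = [];
var MESSAGE_NAME = 'zero-timeout-message';

// Like setTimeout(fn, 0), but without the browser's minimum-delay clamp.
function setZeroTimeout(fn) {
  timeouts.push(fn);
  window.postMessage(MESSAGE_NAME, '*');
}

window.addEventListener('message', function(event) {
  if (event.source === window && event.data === MESSAGE_NAME) {
    event.stopPropagation();
    if (timeouts.length) timeouts.shift()();
  }
}, true);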
