How to find the width and height of a DisplayObject in EaselJS - javascript

var tf = new Text(letter, font, color);
var tfContainer = new Container();
tfContainer.addChild(tf);
How can I find out what are the dimensions of the 'tfContainer'?
I know I can use tf.getMeasuredWidth() and tf.getMeasuredLineHeight(), but I'd rather use a more general approach. Besides, those don't return accurate measurements.

@Akonsu is correct, there is no support for width and height, largely because calculating them is very expensive, especially for vectors and for groups with transformations on their children. We are considering it, but there are no concrete plans for it yet.
-Lanny (gskinner.com)

There is no such functionality in EaselJS. I read somewhere that they were planning to add it, but it is not there yet as far as I know.

Yes, adding width and height to DisplayObject is a must. There could be a calculateSize() method that gets called only when you try to read the size, and only if the size has been invalidated and needs recalculation.
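For what it's worth, here is a rough sketch of that idea in plain JavaScript. It is not part of EaselJS; makeMeasurable, invalidateSize and getSize are made-up names, and the measurement only handles Text children via getMeasuredWidth()/getMeasuredLineHeight():

// Hypothetical sketch only - EaselJS does not provide this.
// Cache a container's measured size and recompute it lazily.
function makeMeasurable(container) {
  var cached = null;

  // Call this whenever children are added/removed or transformed.
  container.invalidateSize = function () {
    cached = null;
  };

  // Recomputes only when the cache has been invalidated.
  container.getSize = function () {
    if (!cached) {
      cached = calculateSize(container); // potentially expensive walk over children
    }
    return cached;
  };
}

// Naive measurement: only handles Text children here, using their
// measured width/line height as rough metrics.
function calculateSize(container) {
  var width = 0, height = 0;
  for (var i = 0; i < container.children.length; i++) {
    var child = container.children[i];
    if (child.getMeasuredWidth) {
      width = Math.max(width, child.x + child.getMeasuredWidth());
      height = Math.max(height, child.y + child.getMeasuredLineHeight());
    }
  }
  return { width: width, height: height };
}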

Related

Is it expensive to read properties from window object?

For example, let's say we have two versions of lazyload (see code below). In terms of performance, is version II better than version I?
const imgs = document.querySelectorAll('img');
window.addEventListener('scroll', lazyload);

// version I
function lazyload() {
  imgs.forEach((img) => {
    if (img.offsetTop < window.innerHeight + window.pageYOffset) {
      img.src = img.dataset.src;
    }
  });
}
// version II
function lazyload() {
  const innerHeight = window.innerHeight;
  const pageYOffset = window.pageYOffset;
  imgs.forEach((img) => {
    if (img.offsetTop < innerHeight + pageYOffset) {
      img.src = img.dataset.src;
    }
  });
}
Your specific question:
I'll rephrase your specific question like this:
Is it costly to access window.innerHeight and/or window.pageYOffset?
It can be. According to Paul Irish of the Google Chrome Developer Tooling team:
All of the below properties or methods, when requested/called in JavaScript, will trigger the browser to synchronously calculate the style and layout*. This is also called reflow or layout thrashing, and is a common performance bottleneck.
...
window
window.scrollX, window.scrollY
window.innerHeight, window.innerWidth
window.getMatchedCSSRules() only forces style
-- What forces layout / reflow (emphasis mine)
At the bottom of that document, Paul indicates the layout reflow will only occur under certain circumstances. The portions below (with my added emphasis) answer your question better and more authoritatively than I could.
Reflow only has a cost if the document has changed and invalidated the style or layout. Typically, this is because the DOM was changed (classes modified, nodes added/removed, even adding a pseudo-class like :focus).
If layout is forced, style must be recalculated first. So forced layout triggers both operations. Their costs are very dependent on the content/situation, but typically both operations are similar in cost.
What should you do about all this? Well, the More on forced layout section below covers everything in more detail, but the short version is:
for loops that force layout & change the DOM are the worst, avoid them.
Use DevTools Timeline to see where this happens. You may be surprised to see how often your app code and library code hits this.
Batch your writes & reads to the DOM (via FastDOM or a virtual DOM implementation). Read your metrics at the beginning of the frame (very very start of rAF, scroll handler, etc), when the numbers are still identical to the last time layout was done.
Changing the src attribute is probably sufficient to "invalidate the style or layout." (Although I suspect using something like correctly-dimensioned SVG placeholders for lazy-loaded images would mitigate or eliminate the cost of the reflows.)
In short, your "version I" implementation is preferable and has, as far as I can tell, no real disadvantages.
Your general question
As shown above, reading properties from the window object can be expensive. But others are right to point out a couple things:
Optimizing too early or too aggressively can cost you valuable time, energy, and (depending on your solution) maintainability.
The only way to be certain is to test. Try both versions of your code, and carefully analyze your favorite dev tool's output for performance differences.
This one seems better:
// version III
function lazyload() {
  const dimension = window.innerHeight + window.pageYOffset;
  imgs.forEach((img) => {
    if (img.offsetTop < dimension) {
      img.src = img.dataset.src;
    }
  });
}
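Whichever version you go with, Paul Irish's note above about reading metrics at the very start of a rAF callback suggests another (untested) tweak: throttle the scroll handler with requestAnimationFrame so the window metrics are read at most once per frame:

// Sketch: run lazyload at most once per animation frame.
let ticking = false;

window.addEventListener('scroll', () => {
  if (!ticking) {
    ticking = true;
    requestAnimationFrame(() => {
      lazyload();      // reads window.innerHeight / pageYOffset once per frame
      ticking = false;
    });
  }
});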

How can I draw from ImageData to canvas with zoom in Haxe?

Well, I fill a ScreenBuffer:ImageData of 480x360 and then want to draw it to a 960x720 canvas. The goal is to decrease the fill rate; today's pixels are very small, and we can make them bigger with some quality loss. I'm looking for an operation with 2D acceleration. But we can't write directly to js.html.Image, and ImageData has no link to js.html.Image. I found an example for pure JS:
https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas
However, it doesn't work in Haxe because there is no 'zoom' element. And there is some information about restrictions in HTML on copying from one image to another.
Many thanks for any answers!
The compiler writes "js.html.Element has no field getContext"
getElementById()'s return type is the generic js.html.Element class. Since in your case, you know you're dealing with a <canvas>, you can safely cast it to the more specific CanvasElement. This then lets you call its getContext() method:
var canvas:CanvasElement = cast js.Browser.document.getElementById('zoom');
var zoomctx = canvas.getContext('2d');
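As for the zoom itself, one common approach (shown here in plain JS; the same calls exist on Haxe's js.html externs) is to put the ImageData onto a small off-screen canvas, then drawImage it onto the large canvas with image smoothing disabled so each source pixel becomes a crisp 2x2 block. A rough sketch, assuming screenBuffer is the 480x360 ImageData and 'zoom' is the 960x720 canvas:

// Off-screen canvas the size of the source buffer.
const small = document.createElement('canvas');
small.width = 480;
small.height = 360;
small.getContext('2d').putImageData(screenBuffer, 0, 0);

// Visible canvas, scaled up 2x.
const big = document.getElementById('zoom');
const ctx = big.getContext('2d');
ctx.imageSmoothingEnabled = false;     // keep hard pixel edges instead of blurring
ctx.drawImage(small, 0, 0, 960, 720);  // scaling is typically GPU-accelerated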

JS Canvas get pixel value very frequently

I am creating a video game based on Node.js/WebGL/Canvas/PIXI.js.
In this game, blocks have generic shapes: they can be circles, polygons, or anything else. So my physics engine needs to know where exactly things are, which pixels are walls and which are not. Since I don't think PIXI allows this, I create an invisible canvas where I put all the wall images of the map. Then I use getImageData to build an "isWall" function at (x, y):
function isWall(x, y) {
  // getImageData lives on the 2D context; a non-zero alpha means wall
  return ctx.getImageData(x, y, 1, 1).data[3] !== 0;
}
However, this is very slow (it takes up to 70% of the game's CPU time, according to Chrome's profiler). Also, since I introduced this function, I sometimes get the error "Oops, WebGL crashed" without any further detail.
Is there a better method to access the value of the pixel? I thought about storing everything in a static bit array (walls have a fixed size), with 1 corresponding to a wall and 0 to a non-wall. Is it reasonable to have a 10-million-cells array in memory?
Some thoughts:
For a first check: use collision regions for all of your objects. The regions can even be defined per side depending on shape (i.e. complex shapes). Only check for collisions inside intersecting regions.
Use half resolution for hit-test bitmaps (or even 25% if your scenario allows). Our brains are not capable of detecting pixel-accurate collisions when things are moving, so this can be taken advantage of.
For complex shapes, pre-store the whole bitmap (based on its region(s)) but convert it to a single-value typed array such as a Uint8Array with high and low values (re-use this instead of fetching pixels one by one via the context). Subtract the object's position and use the result as a delta into your shape region, then hit-test the "bitmap" (see the sketch after these notes). If the shape rotates, transform the incoming check points accordingly (there is probably a sweet spot where updating the bitmap becomes faster than transforming a bunch of points; you need to test for your scenario).
For close-to-square shapes, compromise and use a simple rectangle check.
For circles and ellipses, compare squared distances against the squared radius so you can skip the square root.
In some cases you can perhaps use collision predictions which you calculate before the game starts, when you know all object positions, directions and velocities (calculate the complete motion path, find intersections of those paths, calculate time/distance to those intersections). If your objects change direction due to other events along their path, this will of course not work so well (or try and see whether re-calculating is beneficial).
I'm not sure why you would need 10m cells stored in memory; it's doable though, but you will need to use something like a quad-tree and split the array up, so it becomes efficient to look up a pixel state. IMO you will only need to store "bits" for the complex shapes, and you can limit it further by defining multiple regions per shape. For simpler shapes just use vectors (rectangles, radius/distance). Do performance tests often to find the right balance.
In any case, these sorts of things have to be hand-optimized for the very scenario, so this is just a general take on it. Other factors such as high velocities, rotation, reflection etc. will affect the approach, and it quickly becomes very broad. Hope this gives some input though.
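To make the pre-stored typed-array idea above concrete, here is a rough sketch; shape.mask, maskWidth and maskHeight are made-up names for illustration, not an existing API:

// Sketch of the pre-stored hit-test bitmap described above.
// Assumes shape.mask is a Uint8Array of maskWidth * maskHeight values (0 or 255),
// built once from getImageData when the shape was created.
function hitTestShape(shape, worldX, worldY) {
  // translate the world point into the shape's local bitmap space
  var localX = Math.round(worldX - shape.x);
  var localY = Math.round(worldY - shape.y);

  // outside the shape's stored region entirely
  if (localX < 0 || localY < 0 ||
      localX >= shape.maskWidth || localY >= shape.maskHeight) {
    return false;
  }
  return shape.mask[localY * shape.maskWidth + localX] !== 0;
}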
I use bit arrays to store 0 || 1 info and it works very well.
The information is stored compactly and gets/sets are very fast.
Here is the bit library I use:
https://github.com/drslump/Bits-js/blob/master/lib/Bits.js
I've not tried with 10m bits so you'll have to try it on your own dataset.
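For reference, the core of such a bit array is only a few lines; a minimal sketch (not the Bits-js API), needing roughly 1.25 MB for 10 million cells:

// Minimal bit set over a typed array: 1 bit per cell.
function BitSet(size) {
  this.words = new Uint32Array(Math.ceil(size / 32));
}
BitSet.prototype.set = function (i) {
  this.words[i >>> 5] |= (1 << (i & 31));
};
BitSet.prototype.get = function (i) {
  return (this.words[i >>> 5] & (1 << (i & 31))) !== 0;
};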
The solution you propose is very "flat", meaning each pixel must have a corresponding bit. This results in a large amount of memory being required--even if information is stored as bits.
An alternative is to test data ranges instead of testing each pixel:
If the number of wall pixels is small versus the total number of pixels you might try storing each wall as a series of "runs". For example, a wall run might be stored in an object like this (warning: untested code!):
// an object containing all horizontal wall runs
var xRuns={}
// an object containing all vertical wall runs
var yRuns={}
// define a wall that runs on y=50 from x=100 to x=185
// and then runs on x=185 from y=50 to y=225
var y=50;
var x=185;
if(!xRuns[y]){ xRuns[y]=[]; }
xRuns[y].push({start:100,end:185});
if(!yRuns[x]){ yRuns[x]=[]; }
yRuns[x].push({start:50,end:225});
Then you can quickly test an [x,y] against the wall runs like this (warning: untested code!):
function isWall(x, y) {
  if (xRuns[y]) {
    var a = xRuns[y];
    var i = a.length;
    while (i--) {
      var run = a[i];
      if (x >= run.start && x <= run.end) { return true; }
    }
  }
  if (yRuns[x]) {
    var a = yRuns[x];
    var i = a.length;
    while (i--) {
      var run = a[i];
      if (y >= run.start && y <= run.end) { return true; }
    }
  }
  return false;
}
This should require very few tests because the x & y exactly specify which arrays in xRuns and yRuns need to be tested.
It may (or may not) be faster than testing the "flat" model because there is overhead getting to the specified element of the flat model. You'd have to perf test using both methods.
The wall-run method would likely require much less memory.
Hope this helps...Keep in mind the wall-run alternative is just off the top of my head and probably requires tweaking ;-)
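For example, with the two runs defined above, a quick (equally untested) sanity check would be:

console.log(isWall(150, 50));  // true  - inside the horizontal run at y=50 (x 100..185)
console.log(isWall(185, 100)); // true  - inside the vertical run at x=185 (y 50..225)
console.log(isWall(10, 10));   // false - no runs registered at y=10 or x=10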

Inverse of camera.lookAt()

I googled far and wide but haven't found a solution to what I think is actually a pretty common situation. Say I have a THREE.PerspectiveCamera initialized to look at a certain point in space:
var camera = new THREE.PerspectiveCamera(45, 2, 0.1, 100);
var target = new THREE.Vector3(1, 2, 3);
camera.lookAt(target);
Now, later on in the code I'd like to be able to find out the coordinates of target by simply querying camera.
I tried what was suggested in this question, adapting it to my own scenario:
var vector = new THREE.Vector3();
vector.applyQuaternion(camera.quaternion);
console.log(vector);
But it logs a vector of coordinates (0, 0, 0) instead of the correct coordinates (which, in my example, should be (1, 2, 3)).
Any insights? Cheers.
EDIT:
Ok, so I'm going to add some context here, to justify why MrTrustworthy's solution is unfortunately not applicable in my scenario. I'm trying to tweak the THREE.OrbitControls library for my purposes, since I noticed that when using it, it overrides whichever position the camera was originally looking at. This has also been reported here. Basically, on line 36 of OrbitControls.js (I'm using the version which can be found here) this.target is initialized to a new THREE.Vector3(); I found out that if I manually set it to equal the same vector I pass to camera.lookAt(), everything works just fine: I can start panning, orbiting and zooming the scene from the same POV I would see the scene from if I didn't apply the controls. Of course, I cannot hard-code this information into OrbitControls.js because it would require me to change it every time I want to change the initial "lookAt" of my camera; and if I were to follow MrTrustworthy's suggestion I would have to change line 36 of OrbitControls.js to read this.target = object.targetRef (or this.target = object.targetRef || new THREE.Vector3()), which is also too "opinionated" (it would always require object to have a targetRef property, whereas I'm trying to stick to three.js's existing object properties and methods). Hope this gives a better understanding of my case. Cheers.
If your only use case is "I want to be able to access the camera target's position via the camera object", you could just put a reference into the camera object.
var camera = new THREE.PerspectiveCamera(45, 2, 0.1, 100);
var target = new THREE.Vector3(1, 2, 3);
camera.lookAt(target);
camera.targetRef = target;
//access it
var iNeedThisNow = camera.targetRef;
I figured it out and wrote my solution here. Since the issue affects both THREE.TrackballControls and THREE.OrbitControls, the solution involves applying a slight change to both those files. I wonder if it can be considered a valid change and make its way to rev. 70; I will issue a PR on github just for the sake of it :)
Thanks to all those who pitched in.
Well, you could put the object in a parent, have the parent lookAt the target, and have the child object rotated 180 degrees. That's the quick noob solution.
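One more note on why the snippet in the question logs (0, 0, 0): applying a quaternion to the zero vector just returns the zero vector. The camera's quaternion only encodes a direction, not a distance, so the exact target point cannot be recovered from the camera alone; the viewing direction can, though, since a three.js camera looks down its negative Z axis. A rough sketch (here `distance` is assumed to come from elsewhere, e.g. the distance to the original target if you stored it):

// Direction the camera is facing (unit vector), derived from its rotation.
var dir = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion);

// Any point along the view ray, at a distance you supply yourself.
var pointOnRay = camera.position.clone().add(dir.multiplyScalar(distance));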

Javascript stage scale calculator

I've searched far and wide throughout the web thinking that somebody may have had a similar need, but have come up short. I need to create a calculator that will adjust the size of a stage of draggable objects based on Width and Height fields (in feet).
I need to maintain a max width and height that would, ideally, be set in a variable for easy modification. This max width and height would be set in pixels. I would set the dimensions of the draggable items on the stage in "data-" attributes, I imagine. I'm not looking to match things up in terms of screen resolutions.
What's the best way to approach this? I'm pretty mediocre at math and have come up short in being able to create the functions necessary for scaling a stage of objects and their container like this.
I'm a skilled jQuery user, so if it makes sense to make use of jQuery in this, that'd be great. Thanks in advance.
There are at least a couple of ways to scale things proportionately. Since you will know the projected (room) dimensions and you should know at least one of the scaled dimensions (assuming you know the width of the stage), you can scale proportionately by objectLengthInFeet / roomWidthInFeet * stageWidthInPixels.
Assuming a stage width of 500 pixels for an example, once you know the room dimensions and the width of the stage:
var stageWidth = 500,
    // parseFloat takes a single argument; default to 0 if the input is empty or not parseable
    roomWidth = parseFloat($('#width').val()) || 0,
    roomHeight = parseFloat($('#height').val()) || 0,
    setRoomDimensions = function (e) {
      roomWidth = parseFloat($('#width').val());
      roomHeight = parseFloat($('#height').val());
    },
    feetToPixels = function feetToPixels(feet) {
      var scaled = feet / roomWidth * stageWidth;
      return scaled;
    };
Here's a demo: http://jsfiddle.net/uQDnY/
