Canvas Performance
Lately I've been creating a lot of animations in canvas, and since canvas has no events, you need to build your own event system based on coordinates; in short, you need a collision-detection function. Most of the code samples out there are very long, so I wrote my own, and in doing so I understood how simple it is. So I wrote a sort of game code.
Basically, canvas games are a lot of temporary arrays of numbers, where in most cases a number between 0 and 64, 128, or 256 would be enough, perhaps reduced and used as a multiplier:
hitpoints=max*(number/256);
So I was thinking: what if I store these values in an ArrayBuffer?
var array = new Float64Array(8);
array[0] = 10;
//....
Example (I do it this way... if you know something better, feel free to tell me):
//Usable ammunition
var ammo=[
['missiles',dmg,range,radius,red,green,blue,duration],
//all apart from the first are numbers between 0 and 255
['projectile',dmg,range,radius,red,green,blue,duration]
]
//Flying ammunition
//this is created inside the canvas animation every time you shoot.
var tempAmmo=[
[id,timestarted,x,y,timepassed] //these are all integers.
// the longest is probably the timestamp
]
// I could add the ammo numbers to tempAmmo (which is faster):
[id,timestarted,x,y,timepassed,dmg,range,radius,red,green,blue,duration]
// then i do the same double array for enemies and flyingEnemies.
Wouldn't it be better to store everything in ArrayBuffers?
What I think (correct me if I'm wrong): ArrayBuffers are binary data, so they should be faster for rendering and smaller in memory.
Now, if these two assumptions are correct, how do I properly create an array structure like the one described, perhaps by choosing the proper typed array type?
Note: in my case I'm using a two-dimensional array, and obviously I don't want to use objects.
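To make the idea concrete, here is a sketch of what a flattened typed-array layout could look like (the field order, the Uint8Array choice, and the helper names are my assumptions for illustration, not a settled answer):

```javascript
// Sketch: each ammo record becomes a fixed-width slice of one Uint8Array.
// All fields are assumed to fit in 0-255, matching the arrays above.
var FIELDS = 8;   // dmg, range, radius, red, green, blue, duration, spare
var MAX_AMMO = 64;
var ammoData = new Uint8Array(FIELDS * MAX_AMMO);

function setAmmo(index, values) {
  ammoData.set(values, index * FIELDS); // copy 8 bytes into the record's slot
}

function getAmmoField(index, field) {
  return ammoData[index * FIELDS + field];
}

setAmmo(0, [200, 120, 30, 255, 0, 0, 60, 0]); // a hypothetical "missile"
```

The whole table then lives in one contiguous buffer instead of many small sub-arrays.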
Related
A few months ago I made a small terrain generator, like Minecraft, for a school project.
The way I did this was by using multiple chunks. Each chunk contained a 3-dimensional array that stored the blocks.
Every position in this array corresponded with the position of the block it contained.
blocks[x][y][z] = new Block();
Now I would like to add different sizes of blocks. However, I can't do that with the way I am storing the blocks right now, because bigger blocks would have to be spread over multiple positions in the 3-dimensional array.
An example of a game with different sizes of blocks (and different shapes) is LEGO Worlds. How does a game like this store all these little blocks?
I hope someone can help me with this.
The language I am using is Javascript in combination with WebGL.
Thanks in advance!
In my experience there are a few different ways of tackling an issue like this, but the one I'd recommend depends on the amount of time you have to work on it and the scope (how big) you want the game to be.
Your Current Approach
At the moment I think you're using what most people would consider the most straightforward approach: storing the voxels in a 3D grid
[Source].
But two problems you seem to be having are that there isn't an obvious way to create blocks bigger than 1x1, and that a 3D grid for a world space is fairly inefficient in terms of memory usage (an array has to have memory allocated for every cell, including empty space; JavaScript is no different).
An Alternative Approach
An alternative to using a 3D array is a different data structure called a sparse voxel octree (SVO).
To put it simply, this is a tree data structure that works by subdividing a region of space until everything has been stored.
The 2D form, where a square subdivides into four smaller quadrants, is called a quadtree; the 3D equivalent divides into eight octants and is called an octree. This approach is generally preferable when possible, as it's much more efficient: the tree only occupies more memory when it's absolutely essential, and it can also be packed into a 1D array (technically a 3D array can be too).
A common tactic in some block-based games is that when a region of the same kind of voxel fits into one larger quadrant of the tree, subdivision simply stops there: there's no reason to go deeper if all the data is the same.
The other optimization, which is where the "sparse" comes from, is that regions of empty space (air) are simply deleted, since empty space doesn't do anything special and its location can be inferred.
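To illustrate the subdivision idea, here is a minimal quadtree sketch (the 2D case, for brevity; the class shape and method names are made up for this example). Uniform regions stay as a single leaf and only split when a differing voxel is written:

```javascript
// Sketch of a quadtree: a leaf stores one uniform value for its whole
// region; writing a different voxel subdivides the region on demand.
class QuadNode {
  constructor(x, y, size, value) {
    this.x = x;
    this.y = y;
    this.size = size;
    this.value = value;   // uniform value for the whole region (leaf)
    this.children = null; // four sub-quadrants once subdivided
  }
  set(px, py, v) {
    if (this.size === 1) { this.value = v; return; }
    var h = this.size / 2;
    if (this.children === null) {
      // subdivide: children inherit the current uniform value
      this.children = [
        new QuadNode(this.x,     this.y,     h, this.value),
        new QuadNode(this.x + h, this.y,     h, this.value),
        new QuadNode(this.x,     this.y + h, h, this.value),
        new QuadNode(this.x + h, this.y + h, h, this.value)
      ];
      this.value = null;
    }
    var i = (px >= this.x + h ? 1 : 0) + (py >= this.y + h ? 2 : 0);
    this.children[i].set(px, py, v);
  }
  get(px, py) {
    if (this.children === null) return this.value;
    var h = this.size / 2;
    var i = (px >= this.x + h ? 1 : 0) + (py >= this.y + h ? 2 : 0);
    return this.children[i].get(px, py);
  }
}
```

An octree is the same idea with eight children and a z coordinate; a truly sparse version would additionally delete all-air children instead of keeping them as leaves.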
[SVO Source]
[Z Order Curve Source]
Recommended Approach
Unless you have a few months to complete your game and you're at university, I seriously wouldn't recommend an SVO (though reading up on it could impress any teachers you have). Instead I'd recommend taking the same approach that Minecraft visibly appears to use: e.g. a door is 1x2 but blocks can only be 1x1, so just make it two blocks.
In the example of a door you would have four unique blocks in total: an upper and a lower half, each with an open and a closed variation.
E.g.
var cubeProgram; // shader program
var cubeVBO;     // vertex buffer (I recommend combining vertex & UV coords)
var gl;          // rendering context

// Preset list of block IDs
var BLOCK_TYPES = {
  DOOR_LOWER_OPEN: 0,
  DOOR_UPPER_OPEN: 1,
  DOOR_LOWER_CLOSED: 2,
  DOOR_UPPER_CLOSED: 3
};

var BLOCK_MESHES = {
  GENERIC_VBO: null,
  DOOR_UPPER_VBO: null,
  DOOR_LOWER_VBO: null
};

// Declare a Door class using ES6 syntax
class Door {
  // Assume x & y are the lower half of the door
  constructor(x, y, map) {
    if (y - 1 < 0) {
      console.error("Error: top half of the door goes outside the map");
      return;
    }
    this.x = x;
    this.y = y;
    map[x][y]     = BLOCK_TYPES.DOOR_LOWER_OPEN;
    map[x][y - 1] = BLOCK_TYPES.DOOR_UPPER_OPEN;
  }
}
Given is a big (but not huge) array of strings (1000-5000 single strings). I want to perform some calculations and other stuff on these strings. Because it always stopped working when dealing with that one big array, I rewrote my function to recursively fetch smaller chunks (currently 50 elements each); I did this using splice because I thought it would be a nice idea to reduce the size of the big array step by step.
After implementing the "chunk" version, I'm now able to process up to about 2000 string elements (above that, my laptop becomes extremely slow and crashes after a while).
The question: why is it still crashing, even though I'm not processing that huge array but just small chunks successively?
Thanks in advance.
var file = {some-array} // the array of lines
var portionSize = 50;   // the size of the chunks

// this function is called recursively
function convertStart(i) {
  var size = file.length;
  chunk = file.splice(0, portionSize);
  portionConvert(chunk, i);
}

// this function is used for calculating things with the strings
function portionConvert(chunk, istart) {
  for (var i = 0; i < portionSize; i++) {
    // doing some string calculation with the smaller chunk here
  }
  istart += 1;
  convertStart(istart); // recall the function with the next chunk
}
From my experience, the amount of recursion you're doing can "exceed the stack" unless you narrow down the input, which is why you were able to do more with less. Keep in mind that for every new function call, the state of the function at the call site is saved in memory; note also that your convertStart/portionConvert pair has no base case, so the calls keep piling up even after the array is empty. If you have a computer with little RAM, it's going to get clogged up.
If you're having a processing problem, you should switch to a loop version. Loops don't progressively save the state of the function, just the values. Typically, I would leave recursion for smaller jobs like processing tree-like object structures or parsing expressions; situations that require processing to "intuitively go deeper" into something. In the case where you just have one long array, I would process each of the elements with forEach, which is a for-loop in a handy wrapper:
file.forEach(function(arrayElement) {
  // doing some string calculation with the element (arrayElement) here
});
Take a look at forEach here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach
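If you still want to process in batches (for example to later yield to the UI between chunks), the recursive version can be turned into a plain loop. This is just a sketch; `toUpperCase` stands in for the real string calculation, and the function name is mine:

```javascript
// Sketch: iterative chunked processing, no recursion, no growing call stack.
function convertAll(file, portionSize) {
  var results = [];
  for (var start = 0; start < file.length; start += portionSize) {
    // slice, not splice: the original array is left untouched
    var chunk = file.slice(start, start + portionSize);
    for (var i = 0; i < chunk.length; i++) {
      results.push(chunk[i].toUpperCase()); // placeholder calculation
    }
  }
  return results;
}
```

Because the loop keeps no per-call state, it handles arrays of any length without touching the call stack.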
I am creating a video game based on Node.js/WebGL/Canvas/PIXI.js.
In this game, blocks have generic shapes: they can be circles, polygons, or anything else. So my physics engine needs to know exactly where things are, i.e. which pixels are walls and which are not. Since I don't think PIXI allows this, I create an invisible canvas where I draw all the wall images of the map. Then I use getImageData to build an "isWall" function for (x, y):
function isWall(x, y) {
  // getImageData lives on the 2D context, not the canvas element
  return context.getImageData(x, y, 1, 1).data[3] != 0;
}
However, this is very slow (it takes up to 70% of the game's CPU time, according to Chrome's profiler). Also, since I introduced this function, I sometimes get the error "Oops, WebGL crashed" without any further detail.
Is there a better way to read a pixel's value? I thought about storing everything in a static bit array (the walls have a fixed size), with 1 for a wall and 0 for a non-wall. Is it reasonable to have a 10-million-cell array in memory?
Some thoughts:
For first check: Use collision regions for all of your objects. The regions can even be defined for each side depending on shape (ie. complex shapes). Only check for collisions inside intersecting regions.
Use half resolution for the hit-test bitmaps (or even 25% if your scenario allows it). Our brains can't detect pixel-accurate collisions when things are moving, so this can be taken advantage of.
For complex shapes, pre-store the whole bitmap (based on its region(s)) but convert it to a single typed array, such as a Uint8Array of high/low values, and re-use that instead of fetching pixels one by one via the context. Subtract the object's position and use the result as a delta into your shape region, then hit-test the "bitmap". If the shape rotates, transform incoming check points accordingly (there is probably a sweet spot where updating the bitmap becomes faster than transforming a bunch of points etc.; you need to test for your scenario).
For close-to-square shaped objects, make a compromise and use a simple rectangle check.
For circles and ellipses, compare squared distances against the squared radius (no need to take the square root).
In some cases you can perhaps use collision predictions, calculated before the game starts when all object positions, directions and velocities are known (calculate the complete motion paths, find the intersections of those paths, calculate time/distance to those intersections). If your objects change direction due to other events along the way, this will of course not work so well (or try and see whether re-calculating is beneficial).
I'm not sure why you would need 10M cells stored in memory; it's doable, though, but you will need something like a quad-tree to split the array up so that looking up a pixel's state stays efficient. IMO you only need to store "bits" for the complex shapes, and you can limit it further by defining multiple regions per shape. For simpler shapes just use vectors (rectangles, radius/distance). Do performance tests often to find the right balance.
In any case, these sorts of things have to be hand-optimized for the specific scenario, so this is just a general take on it. Other factors such as high velocities, rotation, reflection etc. will affect the approach, and it quickly becomes very broad. Hope this gives some input though.
I use bit arrays to store 0 || 1 info and they work very well.
The information is stored compactly and gets/sets are very fast.
Here is the bit library I use:
https://github.com/drslump/Bits-js/blob/master/lib/Bits.js
I've not tried with 10m bits so you'll have to try it on your own dataset.
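For reference, the packing itself is only a few lines if you would rather not pull in a library; this sketch (a simplified version of what such bit libraries do; the names are illustrative) stores 8 cells per byte:

```javascript
// Sketch: pack one bit per cell into a Uint8Array (8 cells per byte).
function BitGrid(size) {
  this.bytes = new Uint8Array(Math.ceil(size / 8));
}
BitGrid.prototype.set = function (i, on) {
  if (on) this.bytes[i >> 3] |= (1 << (i & 7));  // set bit i
  else    this.bytes[i >> 3] &= ~(1 << (i & 7)); // clear bit i
};
BitGrid.prototype.get = function (i) {
  return (this.bytes[i >> 3] >> (i & 7)) & 1;
};

// 10 million cells fit in about 1.25 MB:
var walls = new BitGrid(10000000);
walls.set(123456, true);
```

For a 2D map you would index with `y * width + x`.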
The solution you propose is very "flat", meaning each pixel must have a corresponding bit. This results in a large amount of memory being required--even if information is stored as bits.
An alternative is to test data ranges instead of testing each pixel:
If the number of wall pixels is small versus the total number of pixels you might try storing each wall as a series of "runs". For example, a wall run might be stored in an object like this (warning: untested code!):
// an object containing all horizontal wall runs
var xRuns={}
// an object containing all vertical wall runs
var yRuns={}
// define a wall that runs on y=50 from x=100 to x=185
// and then runs on x=185 from y=50 to y=225
var y=50;
var x=185;
if(!xRuns[y]){ xRuns[y]=[]; }
xRuns[y].push({start:100,end:185});
if(!yRuns[x]){ yRuns[x]=[]; }
yRuns[x].push({start:50,end:225});
Then you can quickly test an [x,y] against the wall runs like this (warning untested code!):
function isWall(x,y){
  if(xRuns[y]){
    var a=xRuns[y];
    var i=a.length;
    while(i--){
      var run=a[i];
      if(x>=run.start && x<=run.end){return(true);}
    }
  }
  if(yRuns[x]){
    var a=yRuns[x];
    var i=a.length;
    while(i--){
      var run=a[i];
      if(y>=run.start && y<=run.end){return(true);}
    }
  }
  return(false);
}
This should require very few tests because the x & y exactly specify which array of xRuns and yRuns need to be tested.
It may (or may not) be faster than testing the "flat" model, because there is overhead in reaching the specified element of the flat model. You'd have to perf-test both methods.
The wall-run method would likely require much less memory.
Hope this helps...Keep in mind the wall-run alternative is just off the top of my head and probably requires tweaking ;-)
It is often said here on SO that profiling is the first step of optimizing JavaScript, and the suggested tools are the profilers of Chrome and Firefox. The problem with those is that they report the time spent in each function in a somewhat opaque way, and I have never gotten a good understanding from them. The most helpful tool would tell how many times each row is executed and, if possible, the time spent on each row; that way the bottlenecks would be plainly visible. But until such a tool is implemented/found, we have two options:
1) make own calculator which counts both the time and how many times certain code block or row is executed
2) learn to understand which are slow methods and which are not
For option 2, jsperf.com is of great help. I have tried to learn how to optimize arrays and made a speed test on jsperf.com. The following image shows the results in the 5 main browsers; I found some bottlenecks that I didn't know about earlier.
The main findings were:
1) Assigning values to array elements is significantly slower than assigning to normal variables, regardless of which method is used.
2) Preinitializing and/or prefilling an array before performance-critical loops can improve speed significantly.
3) Trigonometric Math functions are not that slow compared to pushing values into arrays(!)
Here are the explanations of every test:
1. non_array (100%):
The variables were given a predefined value this way:
var non_array_0=0;
var non_array_1=0;
var non_array_2=0;
...
and in timed region they were called this way:
non_array_0=0;
non_array_1=1;
non_array_2=2;
non_array_3=3;
non_array_4=4;
non_array_5=5;
non_array_6=6;
non_array_7=7;
non_array_8=8;
non_array_9=9;
The above is an array-like set of variables, but there seems to be no way to iterate over them or refer to them the way one can with an array. Or is there?
Nothing in this test is faster than assigning a number to a variable.
2. non_array_non_pre (83.78%)
Exactly the same as test 1, but the variables were not pre-initialized or prefilled. The speed is 83.78% of test 1. In every tested browser, prefilled variables were faster than non-prefilled ones. So initialize (and possibly prefill) variables outside any speed-critical loops.
The test code is here:
var non_array_non_pre_0=0;
var non_array_non_pre_1=0;
var non_array_non_pre_2=0;
var non_array_non_pre_3=0;
var non_array_non_pre_4=0;
var non_array_non_pre_5=0;
var non_array_non_pre_6=0;
var non_array_non_pre_7=0;
var non_array_non_pre_8=0;
var non_array_non_pre_9=0;
3. pre_filled_array (19.96 %):
Arrays are evil! When we throw away normal variables (tests 1 and 2) and bring arrays into the picture, the speed decreases significantly. Even with all optimizations (preinitializing and prefilling the array) and assigning values directly without looping or pushing, the speed drops to 19.96 percent. This is very sad and I really don't understand why it occurs. This was one of the main shocks of this test for me: arrays are so important, and I have not found a way to do many things without them.
The test data is here:
pre_filled_array[0]=0;
pre_filled_array[1]=1;
pre_filled_array[2]=2;
pre_filled_array[3]=3;
pre_filled_array[4]=4;
pre_filled_array[5]=5;
pre_filled_array[6]=6;
pre_filled_array[7]=7;
pre_filled_array[8]=8;
pre_filled_array[9]=9;
4. non_pre_filled_array (8.34%):
This is the same test as 3, but the array members are not preinitialized or prefilled; the only optimization was declaring the array beforehand: var non_pre_filled_array=[];
The speed decreases by 58.23% compared to the preinitialized test 3. So preinitializing and/or prefilling the array more than doubles the speed.
The test code is here:
non_pre_filled_array[0]=0;
non_pre_filled_array[1]=1;
non_pre_filled_array[2]=2;
non_pre_filled_array[3]=3;
non_pre_filled_array[4]=4;
non_pre_filled_array[5]=5;
non_pre_filled_array[6]=6;
non_pre_filled_array[7]=7;
non_pre_filled_array[8]=8;
non_pre_filled_array[9]=9;
5. pre_filled_array[i] (7.10%):
Then to the loops: this was the fastest looping method in the test. The array was preinitialized and prefilled.
The speed drop compared to the inline version (test 3) is 64.44%. This is such a remarkable difference that I would say: do not loop if it is not needed. If the array is small (how small has to be tested separately), using inline assignments instead of a loop is wiser.
And because the speed drop is so huge and we really do need loops, it is wise to find a better looping method (e.g. while(i--)).
The test code is here:
for(var i=0;i<10;i++)
{
pre_filled_array[i]=i;
}
6. non_pre_filled_array[i] (5.26%):
If we do not preinitialize and prefill the array, the speed decreases by 25.96%. Again, preinitializing and/or prefilling before speed-critical loops is wise.
The code is here:
for(var i=0;i<10;i++)
{
non_pre_filled_array[i]=i;
}
7. Math calculations (1.17%):
Every test has to have some reference point, and mathematical functions are considered slow. The test consisted of ten "heavy" Math calculations, and here comes the other thing that struck me in this test: look at the speed of tests 8 and 9, where ten integers are pushed into an array in a loop. Calculating these 10 Math functions is more than 30% faster than pushing ten integers into an array in a loop. So it may be easier to convert some array pushes to preinitialized non-array variables and keep the trigonometry.
Of course, if there are hundreds or thousands of calculations per frame, it is wise to use e.g. sqrt instead of sin/cos/tan, taxicab distances for distance comparisons, and diamond angles (t-radians) for angle comparisons; but the main bottleneck can still be elsewhere: looping is slower than inlining, pushing is slower than direct assignment with preinitialization and/or prefilling, and code logic, drawing algorithms and DOM access can be slow. Not everything can be optimized in JavaScript (we have to see something on the screen!), but everything easy and significant is wise to do.
Someone here on SO has said that code is for humans and that readable code is more essential than fast code, because maintenance is the biggest cost. That is an economical viewpoint, but I have found that optimization can achieve both elegance and readability as well as performance. And if a 5% performance boost is achieved and the code becomes more straightforward, it feels good!
The code is here:
non_array_0=Math.sqrt(10435.4557);
non_array_1=Math.atan2(12345,24869);
non_array_2=Math.sin(35.345262356547);
non_array_3=Math.cos(232.43575432);
non_array_4=Math.tan(325);
non_array_5=Math.asin(3459.35498534536);
non_array_6=Math.acos(3452.35);
non_array_7=Math.atan(34.346);
non_array_8=Math.pow(234,222);
non_array_9=9374.34524/342734.255;
8. pre_filled_array.push(i) (0.8%):
Push is evil! Push combined with a loop is baleful evil! For some reason this is a very slow way to assign values into an array. Test 5 (direct assignments in a loop) is nearly 9 times faster than this method, and both methods do exactly the same thing: assign the integers 0-9 into a preinitialized and prefilled array. I have not tested whether this push-in-for-loop evilness is due to the pushing, the looping, the combination of both, or the loop count. There are other examples on jsperf.com that give conflicting results. It is wiser to test with the actual data and then decide; this test may not carry over to other data than what was used.
And here is the code:
for(var i=0;i<10;i++)
{
pre_filled_array.push(i);
}
9. non_pre_filled_array.push(i) (0.74%):
The last and slowest method in this test is the same as test 8, but the array is not prefilled. It is a little slower than test 8, though the difference is not significant (7.23%). Let's compare this slowest method to the fastest: its speed is 0.74% of the speed of method 1, which means method 1 is 135 times faster. So think carefully whether arrays are needed at all in a particular use case. If it is only one or a few pushes, the total speed difference is not noticeable; on the other hand, if there are only a few pushes, they are very simple and elegant to convert to non-array variables.
This is the code:
for(var i=0;i<10;i++)
{
non_pre_filled_array.push(i);
}
And finally the obligatory SO question:
Because the speed difference between non-array variable assignments and array assignments seems so huge according to this test, is there any method that gets the speed of non-array variable assignments together with the dynamics of arrays?
I cannot write var variable_$i = 1 in a loop so that $i is converted to some integer; I have to use variable[i] = 1, which is significantly slower than var variable1 = 1, as the test proved. This may only be critical when arrays are large, and in many cases they are.
EDIT:
I made a new test to confirm the slowness of array access and tried to find a faster way:
http://jsperf.com/read-write-array-vs-variable
Array reads and/or writes are significantly slower than using normal variables. If several operations are done on an array member, it is wiser to store the member's value in a temp variable, do the operations on the temp variable, and finally store the result back into the array member. And although the code becomes longer, it is significantly faster to do those operations inline than in a loop.
Conclusion: arrays versus normal variables are analogous to disk versus memory: memory access is usually faster than disk access, and normal variable access is faster than array access. Maybe chaining the operations is also faster than using an intermediate variable, but that makes the code a little less readable.
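A tiny sketch of that read-once/write-once pattern described above (the variable names are mine):

```javascript
// Sketch: cache an array member in a local variable across several operations.
var scores = [10, 20, 30];

// Instead of indexing scores[1] on every step:
var s = scores[1]; // one array read
s = s * 2;
s = s + 5;
scores[1] = s;     // one array write
```

The intermediate arithmetic touches only the local variable; the array is hit exactly twice.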
Assigning values to arrays is significantly slower than assigning to normal variables. Arrays are evil! This is very sad and I really don't understand why this occurs. Arrays are so important!
That's because normal variables are statically scoped and can be (and are) easily optimised. The compiler/interpreter will learn their type, and might even avoid repeated assignments of the same value.
These kinds of optimisations are done for arrays as well, but they are not as easy and take longer to kick in. There is additional overhead when resolving the property reference, and since JavaScript arrays are auto-growing lists, the length needs to be checked as well.
Prepopulating the arrays will help to avoid reallocations for capacity changes, but for your little arrays (length=10) it shouldn't make much difference.
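For completeness, prepopulating might be sketched like this (whether it actually helps depends on the engine and the array size):

```javascript
// Sketch: preallocate and prefill before a hot loop.
var n = 1000;
var buf = new Array(n);
for (var i = 0; i < n; i++) buf[i] = 0; // prefill so every slot exists

// the hot loop now only overwrites existing slots,
// so the array never has to grow
for (var i = 0; i < n; i++) buf[i] = i * 2;
```

With typed arrays (e.g. `new Float64Array(n)`) the capacity is fixed up front, so this question disappears entirely.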
Is there any method to get the speed of non-array-variable-assigments and the dynamics of arrays?
No. Dynamics do cost, but they are worth it - as are loops.
You will hardly ever be in a situation that needs such a micro-optimisation; don't attempt it. The only case I can think of is fixed-size loops (n <= 4) when dealing with ImageData, where inlining is applicable.
Push is evil!
Nope, your test was just flawed. The jsperf snippets are executed in a timed loop without setup and teardown, and only there were you resetting the size. Your repeated pushes were producing arrays with lengths in the hundreds of thousands, with the corresponding need for memory (re-)allocations. See the console at http://jsperf.com/pre-filled-array/11.
Actually push is just as fast as property assignment. Good measurements are rare, but those that are done properly show varying results across different browser engine versions - changing rapidly and unexpected. See How to append something to an array?, Why is array.push sometimes faster than array[n] = value? and Is there a reason JavaScript developers don't use Array.push()? - the conclusion is that you should use what is most readable / appropriate for your use case, not what you think could be faster.
I have a multi-dimensional array such as:
[
{"EventDate":"20110421221932","LONGITUDE":"-75.61481666666670","LATITUDE":"38.35916666666670","BothConnectionsDown":false},
{"EventDate":"20110421222228","LONGITUDE":"-75.61456666666670","LATITUDE":"38.35946666666670","BothConnectionsDown":false}
]
Is there any plugin available to search for combination of LONGITUDE,LATITUDE?
Thanks in advance
for (var i in VehCommLost) {
  var item = VehCommLost[i];
  if (item.LONGITUDE == 1 && item.LATITUDE == 2) {
    //gotcha
    break;
  }
}
This is a JSON string. Which programming language are you using together with the JS?
By the way, try parseJSON.
Are the latitudes and longitudes completely random? or are they points along a path, so there is some notion of sequence?
If there is some ordering of the points in the array, perhaps a search algorithm could be faster.
For example:
if the inner array is up to 10,000 elements, test item 5000;
if that value is too high, focus on 1-4999;
if too low, focus on 5001-10000; otherwise 5000 is the right answer;
repeat until the range shrinks to the vicinity, making a straight loop through the remaining values quick enough.
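The steps above are an ordinary binary search. A sketch, assuming the array is sorted by the field being searched (the function name is illustrative):

```javascript
// Sketch: binary search on an array sorted by LONGITUDE (ascending).
function findByLongitude(points, target) {
  var lo = 0, hi = points.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;       // middle of the remaining range
    var v = points[mid].LONGITUDE;
    if (v === target) return mid;   // found
    if (v < target) lo = mid + 1;   // too low: search the upper half
    else hi = mid - 1;              // too high: search the lower half
  }
  return -1; // not found
}
```

This takes about log2(n) comparisons, so roughly 14 probes for 10,000 elements.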
After sleeping on it, it seems to me most likely that the solution to your problem lies in recasting the problem.
Since it is a requirement of the system that you are able to find a point quickly, I'd suggest that a large array is the wrong data structure to support that requirement. It may be necessary to have an array, but perhaps there could also be another mechanism to make the search rapid.
As I understand it you are trying to locate points near a known lat-long.
What if, in addition to the array, you had a hash keyed on lat-long, with the value being an array of indexes into the huge array?
Latitude and Longitude can be expressed at different degrees of precision, such as 141.438754 or 141.4
The precision relates to the size of the grid square.
With some knowledge of the business domain, it should be possible to select a reasonably-sized grid such that several points fit inside but not too many to search.
So the hash is keyed on lat-long coords such as '141.4#23.2', with the value being a smaller array of indexes [3456,3478,4579,6344]; using these indexes we can easily access the items in the large array.
Suppose we need to find 141.438754#23.217643: we can reduce the precision to '141.4#23.2' and see if there is an array for that grid square.
If not, widen the search to the (3*3-1=) 8 adjacent grids (plus or minus one unit).
If still not found, widen to the (5*5-9=) 16 grid squares two units away. And so on...
Depending on how the original data is stored and processed, it may be possible to generate the hash server-side, which would be preferable. If you needed to generate the hash client-side, it might be worth doing if you reused it for many searches, but would be kind of pointless if you used the data only once.
Could you comment on the possibility of recasting the problem in a different way, perhaps along these lines?
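A sketch of the grid hash described above (the key format, the 0.1-degree precision, and the function names are illustrative):

```javascript
// Sketch: bucket point indexes by a reduced-precision lat-long key.
function gridKey(lon, lat) {
  // 0.1-degree grid squares; precision should be tuned to the data
  return lon.toFixed(1) + '#' + lat.toFixed(1);
}

function buildGrid(points) {
  var grid = {};
  for (var i = 0; i < points.length; i++) {
    var key = gridKey(parseFloat(points[i].LONGITUDE),
                      parseFloat(points[i].LATITUDE));
    if (!grid[key]) grid[key] = [];
    grid[key].push(i); // store the index into the big array
  }
  return grid;
}

var points = [
  {EventDate: "20110421221932", LONGITUDE: "-75.61481666666670", LATITUDE: "38.35916666666670"},
  {EventDate: "20110421222228", LONGITUDE: "-75.61456666666670", LATITUDE: "38.35946666666670"}
];
var grid = buildGrid(points);
// both sample points land in the same '-75.6#38.4' bucket
```

A lookup then reduces the query coordinates to the same key, checks that bucket, and widens to neighbouring grid squares if needed.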