Is JavaScript not fast enough for doing fluid simulation?

I am currently trying to implement a small fluid simulation in p5.js. As a test I tried to render 20K squares with a random colour, and I got a frame rate of 2.xxx.
var sim;
var xdim = 200; var xLength;
var ydim = 100; var yLength;

function setup() {
  createCanvas(800, 400);
  sim = new Sim(xdim, ydim);
}

function draw() {
  xLength = width / xdim;
  yLength = height / ydim;
  for (var i = 0; i < xdim; ++i) {
    for (var j = 0; j < ydim; ++j) {
      fill(100);
      rect(i * xLength, j * yLength, xLength, yLength);
    }
  }
  console.log(frameRate());
}
What is the problem here? Is the library not good enough? Is my computer poorly configured? Or is JavaScript simply not suitable for this kind of implementation?

We can't help debug your code without an MCVE. Specifically, you haven't provided the Sim class, so we can't run your code at all.
But you need to take a step back and ask yourself this: what performance do you expect? You can't really complain about performance if you didn't have any expectations going in.
Also, you might want to figure out how many squares you can display before seeing a performance hit.
From there it's a game of looking for optimizations. You're going to have to do some profiling to understand exactly where your performance hit is. Maybe you display fewer squares, or maybe you lower the framerate, or maybe you do some pre-rendering. Again, what you do depends on exactly what your expectations and goals are.
I will say that you should take that call to console.log() out of your draw() loop. You should only use that for debugging, and it's not going to improve your performance to call that every single frame.
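For instance, here is a minimal sketch of the pre-rendering idea, assuming the grid itself changes rarely: draw the squares once into an off-screen buffer with createGraphics(), then blit that buffer each frame. xdim and ydim are reused from the question; the Sim class is left out since it wasn't provided.
var xdim = 200;
var ydim = 100;
var gridBuffer;

function setup() {
  createCanvas(800, 400);
  // Render the 20K squares once into an off-screen buffer.
  gridBuffer = createGraphics(800, 400);
  var xLength = width / xdim;
  var yLength = height / ydim;
  for (var i = 0; i < xdim; ++i) {
    for (var j = 0; j < ydim; ++j) {
      gridBuffer.fill(100);
      gridBuffer.rect(i * xLength, j * yLength, xLength, yLength);
    }
  }
}

function draw() {
  // One image() call per frame instead of 20,000 rect() calls.
  image(gridBuffer, 0, 0);
}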

Related

Is it possible to check whether a browser is fast using JavaScript, in a simple way?

I am building a one-page app.
I have some visual features that can be on or off.
A simple way is to ask the user whether they would like to see better effects.
However, is it possible for me to detect it from JavaScript rather than asking them?
For instance:
var iterations = 100000000;
var start = performance.now();
for (var i = 0; i < iterations; i++) {
  // do nothing
}
var time_took = performance.now() - start;
if (time_took < 150) {
  // good device
  launchEffects();
}
Is the way above reliable? Is there a good way to achieve this? An approximate performance check is good enough; I don't want to import a huge third-party library.
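One caveat about the approach above, for what it's worth: a modern JIT can eliminate an empty loop entirely, so the timed work should have a visible side effect. A minimal sketch, where launchEffects() and the 150 ms threshold are carried over from the question:
var iterations = 10000000;
var sink = 0;
var start = performance.now();
for (var i = 0; i < iterations; i++) {
  sink += Math.sqrt(i); // real work the engine cannot drop
}
var time_took = performance.now() - start;
console.log(sink); // use the result so the loop is not dead code
if (time_took < 150) {
  launchEffects();
}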

Best way to distribute many points among a few features?

I have 5000+ LatLng points, and for each of them I would like to find out which feature (region) they belong to. The features come from a kmz layer by Philippe Ivaldi, converted to GeoJSON.
Currently I am doing this with turfjs in a double for loop. As expected, the calculation freezes the browser for ten minutes, which isn't very convenient.
Here's my code:
function countCeaByLayer(geoJsonLayer){
  jQuery.getJSON('http://localhost/server/retrieveData.php', function(data){
    // Build a turf point for every record.
    var turfPoints = [];
    for(var i = 0; i < data.length; i++){
      turfPoints.push(turf.point([data[i].longitudeWGS84, data[i].latitudeWGS84]));
    }
    // For each feature, test every remaining point; points that fall
    // inside a feature are dropped from the list before the next feature.
    var features = geoJsonLayer.toGeoJSON().features;
    for(var i = 0; i < features.length; i++){
      var turfPointsNew = [];
      for(var j = 0; j < turfPoints.length; j++){
        var isInside = turf.inside(turfPoints[j], features[i]);
        if(!isInside) turfPointsNew.push(turfPoints[j]);
      }
      turfPoints = turfPointsNew;
    }
    console.log("done");
  });
}
What can I do to avoid freezing the browser?
Make it async?
Do the calculation with node and turfjs on a server?
Or deploy leafletjs on a server with node and leaflet-headless?
...or should I just deal with it?
Thanks!
To optimize your code, you should do something like this.
Loop over the points.
For each point, as you iterate over the polygons to find out whether the point is inside one of them, first get the polygon's bounds and check whether the point is within those bounds.
If not, you can skip any further work and move on to the next polygon.
If it is within the bounds, go on to a full check of whether the point is inside the polygon itself.
If it is, break the loop iterating over the polygons and switch to the next point.
For example, it could be:
points.forEach(function(point) {
  polygons.some(function(polygon) {
    if (polygon.getBounds().contains(point)) { // or another method if you are not playing with Leaflet features
      if (turf.isInside(polygon, point)) { // for example; not sure this method actually exists, but you get the concept
        // point is within the polygon, do stuff
        return true; // break the some loop
      }
    }
  });
});
I have myself developed something that does exactly the same thing, also based on turf. I run it on the client side (and my loops are made with .some rather than classical for loops, so it could go even further in terms of performance) and I have never experienced a freeze.
From my point of view, 5000 points is peanuts for a browser to handle, but if your polygons are really complex (dozens to hundreds of thousands of vertices), this can of course slow down the process.
Br,
Vincent
If Stranded Kid's answer is overkill for you,
geoJsonLayer.eachLayer(function(layer){
  var within = turf.within(turf.featureCollection(turfPoints), turf.featureCollection([layer.toGeoJSON()]));
  console.dir(within);
});
And make sure your coordinates are floats and not strings, because that's what caused the slowdown for me.
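A minimal sketch of that conversion, using the field names from the question's code:
var turfPoints = [];
for (var i = 0; i < data.length; i++) {
  // parseFloat guards against coordinates that arrive as strings.
  turfPoints.push(turf.point([
    parseFloat(data[i].longitudeWGS84),
    parseFloat(data[i].latitudeWGS84)
  ]));
}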

After Effects scripting: setting startTime gives unpredictable results. Bug in After Effects or am I doing it wrong?

I have a very strange problem while scripting in After Effects. I wrote a few scripts to perform common tasks, and in one of the functions in my latest script I am trying to set the start time of a layer to 0 so the layer will start at the beginning of the composition (so I can perform some more complex tasks in the script).
For this i am using the following code:
var theSelectedLayers = app.project.activeItem.selectedLayers;
var arrayLength = theSelectedLayers.length;
for (var i = 0; i < arrayLength; i++) {
  var workingLayer = theSelectedLayers[i];
  workingLayer.startTime = 0;
}
This works fine in 90% of the compositions I work in, but for some strange reason, in some compositions the layers are not moved to the start time; they are moved forward in the timeline (the amount varies).
There is no obvious reason for this; the compositions I have used to test this script are near identical. Am I doing something wrong, or is there a better way to solve this problem?
I will be grateful for any help! I am really stuck here.
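One hedged guess, since the question doesn't confirm it: startTime positions the layer's source in the composition, while a trimmed layer only becomes visible at its inPoint. If some of the problem layers are trimmed, setting startTime = 0 leaves their visible start offset by the trim amount. A sketch that aligns the visible in-point with the composition start instead:
var theSelectedLayers = app.project.activeItem.selectedLayers;
for (var i = 0; i < theSelectedLayers.length; i++) {
  var workingLayer = theSelectedLayers[i];
  // Shift the whole layer so its visible in-point lands at 0;
  // for an untrimmed layer this is identical to startTime = 0.
  workingLayer.startTime -= workingLayer.inPoint;
}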

How to manage arguments

I apologise in advance if I'm too bad at using the search engine and this has already been answered. Please point me in the right direction in that case.
I've recently begun to use the arguments variable in functions, and now I need to slice it. Everywhere I look people are doing things like:
function getArguments(args, start) {
  return Array.prototype.slice.call(args, start);
}
And according to MDN this is bad for performance:
You should not slice on arguments because it prevents optimizations in JavaScript engines (V8 for example).
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/arguments
Is there a reason why I don't see anyone doing things like this:
function getArguments(args, start) {
  var i, p = 0;
  var len = args.length;
  var params = [];
  for (i = start; i < len; ++i) {
    params[p] = args[i];
    p += 1;
  }
  return params;
}
You get the arguments you want, and no slicing is done. So from my point of view, you don't lose anything with this; maybe it uses a little extra memory and is slightly slower, but not to the point where it really makes a difference, right?
Just wanted to know if my logic here is flawed.
Here is a discussion,
and here is an introduction;
e.g. here uses the inline slice.
It appears from the discussion that #Eason posted (here) that the debate is in the "micro-optimization" category, i.e. most of us will never hit those performance bumps because our code isn't being run through the kind of iterations needed for them to even appear on the radar.
Here's a good quote that sums it up:
Micro-optimizations like this are always going to be a trade-off
between the code's complexity/readability and its performance.
In many cases, the complexity/readability is more important. In this case, the
very slowest method that was tested netted a runtime of 4.3
microseconds. If you're writing a webservice and you're slicing args
two times per request and then doing 100 ms worth of other work, an
extra 0.0086 ms will not be noticeable and it's not worth the time or
the code pollution to optimize.
These optimizations are most helpful in really hot loops that you're hitting a gajillionty times. Use a
profiler to find your hot code, and optimize your hottest code first,
until the performance you've achieved is satisfactory.
I'm satisfied, and will use Array.prototype.slice.call() unless I detect a performance blip that points to that particular piece of code not hitting the V8 optimizer.
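For what it's worth, and separate from the discussion above: in environments that support ES6, rest parameters sidestep the arguments object entirely, so there is nothing to slice and nothing for the optimizer to bail out on:
// `rest` is a real Array holding every argument after `first`,
// so Array.prototype.slice.call(arguments, ...) is unnecessary.
function example(first, ...rest) {
  return rest;
}

example(1, 2, 3); // [2, 3]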

Improving rudimentary AI of Angular based chess game

I have created a chess game with Angular and chess.js and am trying to improve its rudimentary AI. The un-improved code currently lives at: https://gist.github.com/dexygen/8a19eba3c58fa6a9d0ff (or https://gist.githubusercontent.com/dexygen/8a19eba3c58fa6a9d0ff/raw/d8ee960cde7d30850c0f00f511619651396f5215/ng-chess)
What the AI currently consists of is checking whether the computer (black) has a move that checkmates (using chess.js's in_checkmate() method) and, if so, mating the human (white); otherwise it makes a random move. To improve this, I thought that instead of merely making a random move, I would have the AI check white's counters to each of black's candidate responses, and if white then has a checkmate, exclude that black response from the moves to randomly select from.
I would like to improve the AI within makeMove() (which currently merely delegates to makeRandomMove()), but I am finding this to be harder than expected. What I expected to be able to do was, not unlike mateNextMove() (refer to lines 155-168 of the gist), check for in_checkmate() within a loop, except that the loop would be nested to account for black responses and white counters to those responses.
Here is my first attempt at what I expected would work, but it does not avoid checkmate when possible.
function makeMove(responses) {
  var evaluator = new Chess();
  var response;
  var allowsMate;
  var counters = [];
  var candidates = [];
  for (var i = 0, n = responses.length; i < n; i++) {
    response = responses[i];
    allowsMate = false;
    evaluator.load(chess.fen());
    evaluator.move(response);
    counters = evaluator.moves();
    //console.log(evaluator.ascii());
    //console.log(counters);
    for (var j = 0, k = counters.length; j < k; j++) {
      evaluator.move(counters[j]);
      if (evaluator.in_checkmate()) {
        //console.log('in_checkmate');
        allowsMate = true;
        break;
      }
    }
    if (!allowsMate) {
      candidates.push(response);
    }
  }
  return makeRandomMove(candidates);
}
For debugging/testing, taking advantage of a little chess knowledge helps, specifically attempting an early "Scholar's Mate"; see: http://en.wikipedia.org/wiki/Scholar%27s_mate. If black's random moves make this impractical, just start over; the opportunity presents itself as often as not. Qxf7# is the notation for the mating move of Scholar's Mate, both in the Wikipedia article and as returned by chess.moves(). So I have tried to modify the inner for loop as follows:
for (var j = 0, k = counters.length; j < k; j++) {
  evaluator.move(counters[j]);
  if (counters[j] == 'Qxf7#') {
    console.log(evaluator.in_checkmate());
  }
}
But I've had this return false and allow me to deliver the mate. What am I doing wrong (and who possibly wants to help me on this project)?
It seems to me from the code you posted that you are not undoing the moves you make. When you loop through all possible counter-moves, you make each move and then check for a threat, but you never unmake the move, so every subsequent test starts from a corrupted position. That is probably also why your last test didn't work.
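A minimal sketch of that fix using chess.js's undo(), which reverses the last move made; only the inner loop of makeMove() changes:
for (var j = 0, k = counters.length; j < k; j++) {
  evaluator.move(counters[j]);
  if (evaluator.in_checkmate()) {
    allowsMate = true;
    evaluator.undo(); // unmake the mating counter before breaking
    break;
  }
  // Unmake this counter so the next one is tried from the position
  // after black's response, not from a corrupted board.
  evaluator.undo();
}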
