A-Star Algorithm: Slow Implementation - javascript

I am working on an implementation of the A-Star algorithm in JavaScript. It works, but it takes a very long time to create a path between two points that are close together: from (1,1) to (6,6) it takes several seconds. I would like to know what mistakes I have made in the algorithm and how to resolve them.
My code:
Node.prototype.genNeighbours = function() {
    var right = new Node(this.x + 1, this.y);
    var left = new Node(this.x - 1, this.y);
    var top = new Node(this.x, this.y + 1);
    var bottom = new Node(this.x, this.y - 1);
    this.neighbours = [right, left, top, bottom];
}

AStar.prototype.getSmallestNode = function(openarr) {
    var comp = 0;
    for(var i = 0; i < openarr.length; i++) {
        if(openarr[i].f < openarr[comp].f) comp = i;
    }
    return comp;
}

AStar.prototype.calculateRoute = function(start, dest, arr){
    var open = new Array();
    var closed = new Array();
    start.g = 0;
    start.h = this.manhattanDistance(start.x, dest.x, start.y, dest.y);
    start.f = start.h;
    start.genNeighbours();
    open.push(start);
    while(open.length > 0) {
        var currentNode = null;
        this.getSmallestNode(open);
        currentNode = open[0];
        if(this.equals(currentNode,dest)) return currentNode;
        currentNode.genNeighbours();
        var iOfCurr = open.indexOf(currentNode);
        open.splice(iOfCurr, 1);
        closed.push(currentNode);
        for(var i = 0; i < currentNode.neighbours.length; i++) {
            var neighbour = currentNode.neighbours[i];
            if(neighbour == null) continue;
            var newG = currentNode.g + 1;
            if(newG < neighbour.g) {
                var iOfNeigh = open.indexOf(neighbour);
                var iiOfNeigh = closed.indexOf(neighbour);
                open.splice(iOfNeigh, 1);
                closed.splice(iiOfNeigh, 1);
            }
            if(open.indexOf(neighbour) == -1 && closed.indexOf(neighbour) == -1) {
                neighbour.g = newG;
                neighbour.h = this.manhattanDistance(neighbour.x, dest.x, neighbour.y, dest.y);
                neighbour.f = neighbour.g + neighbour.h;
                neighbour.parent = currentNode;
                open.push(neighbour);
            }
        }
    }
}
Edit: I've now resolved the problem. It was due to the fact that I was calling open.sort(), which wasn't sorting the nodes by their 'f' value. I wrote a custom comparison function and now the algorithm runs quickly.
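For anyone hitting the same thing: without a comparator, Array.prototype.sort() compares elements as strings, so node objects are never ordered by their f value. A one-line comparator fixes it (a sketch, using the open list from the code above):

// Sort the open list by ascending f value; the default sort would
// compare the nodes' string representations instead.
open.sort(function(a, b) { return a.f - b.f; });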

A few mistakes I've spotted:
Your set of open nodes is not structured in any way that makes retrieving the one with the minimal f-value easy. The usual choice for this is a priority queue, but inserting new nodes in sorted order (instead of open.push(neighbour)) should suffice (at first).
In your getSmallestNode function, you may start the loop at index 1.
You are calling getSmallestNode(), but not using its result at all. You're only taking currentNode = open[0]; every time (and then even searching for its position to splice it, when it's simply 0). With a sorted queue, it's just currentNode = open.shift().
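If you don't want a full priority queue yet, a binary-search insert is enough to keep open sorted by f (a sketch; insertSorted is a hypothetical helper used in place of open.push(neighbour)):

function insertSorted(open, node) {
    // Find the first index whose f is >= node.f (binary search),
    // then splice the node in, keeping open sorted ascending by f.
    var lo = 0, hi = open.length;
    while (lo < hi) {
        var mid = (lo + hi) >> 1;
        if (open[mid].f < node.f) lo = mid + 1;
        else hi = mid;
    }
    open.splice(lo, 0, node);
}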
However, the most important thing (and what probably went most wrong) is your genNeighbours() function. It creates entirely new node objects every time it is called - objects that were unheard of before, and are not known to your algorithm (or its closed set). They may be at the same position in your grid as other nodes, but they're different objects (which are compared by reference, not by similarity). This means that indexOf will never find those new neighbours in the closed array, and they will get processed over and over (and over). I won't attempt to calculate the complexity of this implementation, but I'd guess it's even worse than exponential.
Typically, the A* algorithm is executed on an already existing graph. An OOP-style getNeighbours function would return references to the existing node objects instead of creating new ones with the same coordinates. If you need to generate the graph dynamically, you'll need a lookup structure (a two-dimensional array?) to store and retrieve already-generated nodes.
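A minimal sketch of such a lookup structure, assuming the Node(x, y) constructor from the question (Node.cache and Node.get are illustrative additions):

// Cache nodes by coordinate so that asking for (x, y) twice
// returns the same object, making indexOf comparisons meaningful.
Node.cache = {};
Node.get = function(x, y) {
    var key = x + ',' + y;
    if (!Node.cache[key]) Node.cache[key] = new Node(x, y);
    return Node.cache[key];
};
Node.prototype.genNeighbours = function() {
    this.neighbours = [
        Node.get(this.x + 1, this.y),
        Node.get(this.x - 1, this.y),
        Node.get(this.x, this.y + 1),
        Node.get(this.x, this.y - 1)
    ];
};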

Related

Optimize order of objects within layer in Illustrator for reduced laser cutting time

I'm trying to optimize the layer order of paths in Illustrator so that when they are sent to a laser cutter, the end of one path is close to the start of the next path, reducing the travel time of the laser between cuts.
I've come up with the following code, which works, but could be further optimized by considering the length of lines or through an annealing process. I'm posting it here in case anyone else is Googling 'laser cutting optimization' and doesn't want to write their own code. Also, if anyone can suggest improvements to the code below, I'd love to hear them.
// For this script to work, all paths to be optimised need to be on layer 0.
// Create a new empty layer in position 1 in the layer hierarchy.
// Run the script; all paths will move from layer 0 to layer 1 in an optimized order.
// Further optimisation is possible with 'Annealing', but this will be a good first-run optimization.
// Load into Visual Studio Code, follow the steps on this website
// https://medium.com/@jtnimoy/illustrator-scripting-in-visual-studio-code-cdcf4b97365d
// to get set up, then run the code when linked to Illustrator.
function test() {
    if (!app.documents.length) {
        alert("You must have a document open.");
        return;
    }
    var docRef = app.activeDocument;

    function endToStartDistance(endPath, startPath) {
        var endPoint = endPath.pathPoints[endPath.pathPoints.length - 1].anchor;
        var startPoint = startPath.pathPoints[0].anchor;
        var dx = (endPoint[0] - startPoint[0]);
        var dy = (endPoint[1] - startPoint[1]);
        var dist = Math.pow((Math.pow(dx, 2) + Math.pow(dy, 2)), 0.5);
        return dist;
    }

    function Optimize(items) {
        var lastPath, closest, minDist, delIndex, curItem, tempItems = [];
        var topLayer = app.activeDocument.layers[0];
        var newLayer = app.activeDocument.layers[1];
        for (var x = 1, len = items.length; x < len; x++) {
            tempItems.push(items[x]);
        }
        lastPath = items[0];
        lastPath.move(newLayer, ElementPlacement.PLACEATBEGINNING);
        while (tempItems.length) {
            closest = tempItems[0];
            minDist = endToStartDistance(lastPath, closest);
            delIndex = 0;
            for (var y = 1, len = tempItems.length; y < len; y++) {
                curItem = tempItems[y];
                if (endToStartDistance(lastPath, curItem) < minDist) {
                    closest = curItem;
                    minDist = endToStartDistance(lastPath, closest);
                    delIndex = y;
                }
            }
            $.writeln(minDist);
            //closest.zOrder(ZOrderMethod.BRINGTOFRONT);
            closest.move(newLayer, ElementPlacement.PLACEATBEGINNING);
            lastPath = closest;
            tempItems.splice(delIndex, 1);
        }
    }

    var allPaths = [];
    for (var i = 0; i < documents[0].pathItems.length; i++) {
        allPaths.push(documents[0].pathItems[i]);
        //$.writeln(documents[0].pathItems[i].pathPoints[0].anchor[0])
    }
    Optimize(allPaths);
}
test();
Version 2 of the above code. Changes include the ability to reverse paths where this reduces the distance the cutting head has to move between paths, and added comments to make the code easier to read.
// Create a new empty layer in position 1 in the layer hierarchy.
// Run the script; all paths will move from their current layer to layer 1 in an optimized order.
// Further optimisation is possible with 'Annealing', but this will be a good first-run optimization.
// Load into Visual Studio Code, follow the steps on this website
// https://medium.com/@jtnimoy/illustrator-scripting-in-visual-studio-code-cdcf4b97365d
// to get set up, then run the code when linked to Illustrator.
function main() {
    if (!app.documents.length) {
        alert("You must have a document open.");
        return;
    }
    var docRef = app.activeDocument;

    // The function below gets the distance between the end of the endPath vector object
    // and the start of the startPath vector object.
    function endToStartDistance(endPath, startPath) {
        var endPoint = endPath.pathPoints[endPath.pathPoints.length - 1].anchor;
        var startPoint = startPath.pathPoints[0].anchor;
        var dx = (endPoint[0] - startPoint[0]);
        var dy = (endPoint[1] - startPoint[1]);
        var dist = Math.pow((Math.pow(dx, 2) + Math.pow(dy, 2)), 0.5);
        return dist;
    }

    // The function below gets the distance between the end of the endPath vector object
    // and the end of the startPath vector object.
    function endToEndDistance(endPath, startPath) {
        var endPoint = endPath.pathPoints[endPath.pathPoints.length - 1].anchor;
        var startPoint = startPath.pathPoints[startPath.pathPoints.length - 1].anchor;
        var dx = (endPoint[0] - startPoint[0]);
        var dy = (endPoint[1] - startPoint[1]);
        var dist = Math.pow((Math.pow(dx, 2) + Math.pow(dy, 2)), 0.5);
        return dist;
    }

    // The function below iterates over the supplied list of tempItems (path objects) and checks the distance
    // between the end of path objects and the start/end of all other path objects, ordering the objects in the
    // layer hierarchy so that there is the shortest distance between the end of one path and the start of the next.
    // The function can reverse the direction of a path if this results in a smaller distance to the next object.
    function Optimize(tempItems) {
        var lastPath, closest, minDist, delIndex, curItem;
        var newLayer = app.activeDocument.layers[1]; // There needs to be an empty layer in position 2 in the
                                                     // layer hierarchy. This is where the path objects are
                                                     // moved as they are sorted.
        lastPath = tempItems[0]; // Arbitrarily take the first item in the list of supplied items.
        tempItems.splice(0, 1); // Remove the first item from the list of items to be iterated over.
        lastPath.move(newLayer, ElementPlacement.PLACEATBEGINNING); // Move the first item to the first position in the new layer.
        while (tempItems.length) { // Loop over all supplied items while the length of this array is not 0.
                                   // Items are removed from the list once sorted.
            closest = tempItems[0]; // Start by checking the distance to the first item in the list.
            minDist = Math.min(endToStartDistance(lastPath, closest), endToEndDistance(lastPath, closest));
            // Find the smallest of the distances between the end of the previous path item
            // and the start / end of this next item.
            delIndex = 0; // delIndex is the index to be removed from the tempItems list after iterating
                          // through the entire list.
            for (var y = 1, len = tempItems.length; y < len; y++) {
                // Iterate over all items in the list, starting at item 1 (item 0 already being used above).
                curItem = tempItems[y];
                if (endToStartDistance(lastPath, curItem) < minDist || endToEndDistance(lastPath, curItem) < minDist) {
                    // If either the end/start distance to the current item is smaller than the previously
                    // measured minDist, the current path item becomes the new closest entry.
                    closest = curItem;
                    minDist = Math.min(endToStartDistance(lastPath, closest), endToEndDistance(lastPath, closest));
                    // The new minDist is set
                    delIndex = y; // and the item is marked for removal from the list at the end of the loop.
                }
            }
            if (endToEndDistance(lastPath, closest) < endToStartDistance(lastPath, closest)) {
                reversePaths(closest); // If the smallest distance is yielded from the end of the previous path
                                       // to the end of the next path, reverse the next path so that the
                                       // end-to-start distance between paths is minimised.
            }
            closest.move(newLayer, ElementPlacement.PLACEATBEGINNING); // Move the closest path item to the beginning of the new layer.
            lastPath = closest; // The moved path item becomes the next item in the chain, and is stored as the
                                // previous item (lastPath) for when the loop iterates again.
            tempItems.splice(delIndex, 1); // Remove the item identified as closest from the list of items to
                                           // iterate over. When there are no items left in the list, the loop ends.
        }
    }

    function reversePaths(theItems) { // This code taken / adapted from https://gist.github.com/Grsmto/bfe1541957a0bb17972d
        if (theItems.typename == "PathItem" && !theItems.locked && !theItems.parent.locked && !theItems.layer.locked) {
            var pathLen = theItems.pathPoints.length;
            for (var k = 0; k < pathLen / 2; k++) {
                var h = pathLen - k - 1;
                var HintenAnchor = theItems.pathPoints[h].anchor;
                var HintenLeft = theItems.pathPoints[h].leftDirection;
                var HintenType = theItems.pathPoints[h].pointType;
                var HintenRight = theItems.pathPoints[h].rightDirection;
                theItems.pathPoints[h].anchor = theItems.pathPoints[k].anchor;
                theItems.pathPoints[h].leftDirection = theItems.pathPoints[k].rightDirection;
                theItems.pathPoints[h].pointType = theItems.pathPoints[k].pointType;
                theItems.pathPoints[h].rightDirection = theItems.pathPoints[k].leftDirection;
                theItems.pathPoints[k].anchor = HintenAnchor;
                theItems.pathPoints[k].leftDirection = HintenRight;
                theItems.pathPoints[k].pointType = HintenType;
                theItems.pathPoints[k].rightDirection = HintenLeft;
            }
        }
    }

    var allPaths = []; // Grab every line in the document.
    for (var i = 0; i < documents[0].pathItems.length; i++) {
        allPaths.push(documents[0].pathItems[i]);
        // This could be changed to use the selected objects, or to filter only objects below a certain
        // stroke weight so that raster paths are not affected, but cut paths are.
    }
    Optimize(allPaths); // Feed all paths in the document into the Optimize function.
}
main(); // Call the main function, executing the above code.
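The 'Annealing' mentioned in the comments could be layered on top of this greedy pass. Below is a minimal sketch of simulated annealing over an ordering of path indices; totalTravel(order) is a hypothetical helper that would sum endToStartDistance along the ordering, and the cooling numbers are illustrative, not tuned:

function anneal(order, totalTravel) {
    // Classic simulated annealing over orderings: propose a random
    // segment reversal (a 2-opt move), always accept improvements, and
    // accept worse orderings with a probability that shrinks as temp cools.
    var temp = 1.0, cooling = 0.999, minTemp = 0.0001;
    var best = order.slice();
    var bestDist = totalTravel(best);
    var current = best.slice();
    var currentDist = bestDist;
    while (temp > minTemp) {
        var i = Math.floor(Math.random() * current.length);
        var j = Math.floor(Math.random() * current.length);
        if (i > j) { var t = i; i = j; j = t; }
        var candidate = current.slice(0, i)
            .concat(current.slice(i, j + 1).reverse())
            .concat(current.slice(j + 1));
        var candDist = totalTravel(candidate);
        if (candDist < currentDist || Math.random() < Math.exp((currentDist - candDist) / temp)) {
            current = candidate;
            currentDist = candDist;
            if (currentDist < bestDist) { best = current.slice(); bestDist = currentDist; }
        }
        temp *= cooling;
    }
    return best;
}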

Javascript - Merge Sort Visualizer using CSS Style to Sort, having issues

I am having an issue with my merge sort visualizer.
My program has no issues visualizing bubble sort or quick sort, as I can do the swapping operation of CSS property values in place, but I am having major issues trying to get merge sort to work properly. The issue is that as soon as I try to update a CSS property on the DOM, the sort stops functioning.
I have tried passing in copies of the data I wish to sort, and all sorts of weird things I could think of to make it work. I am currently trying to sort by the css property 'maxWidth'. I use that to display how large a div element is in the html file and then visualize the sort from there.
My latest thought has been to set all the div elements to have another css property equal to the maxWidth (I am using fontSize as it does not affect my program) and then sorting based on fontSize, allowing me in theory to change the maxWidth properties of the divs without affecting merge sorts algorithm.
I am including my entire js file as I hope reading my correctly working bubble sort or quick sort functions can help you see what I am trying to achieve. Thank you so much for taking the time to read this and offer any help!
Important note: I am not trying to visualize the individual steps of merge sort yet, because I am unable to update the final result to the HTML page without affecting the merge sort algorithm. According to console logs, my merge sort algorithm does indeed work; I just can't update the DOM without messing it up. Once I can do that, I will turn it into an asynchronous function using async and await, like I previously did with bubble and quick sort.
/********* Generate and Store Divs to be Sorted *************/
const generateSortingDivs = (numOfDivs) => {
    const divContainer = document.querySelector('.div-container');
    let html = '';
    for (let i = 0; i < numOfDivs; i++) {
        let r = Math.floor(Math.random() * 100);
        html += `<div class='sorting-div' id='id-${i}' style='max-width: ${r}%'>&nbsp;</div>`;
    }
    divContainer.innerHTML = html;
    for (let i = 0; i < numOfDivs; i++) {
        let x = document.getElementById('id-' + i);
        x.style.fontSize = x.style.maxWidth;
    }
}

const storeSortingDivs = () => {
    const divContainer = document.querySelector('.div-container');
    let divCollection = [];
    const numOfDivs = divContainer.childElementCount;
    for (let i = 0; i < numOfDivs; i++) {
        let div = document.getElementById('id-' + i);
        divCollection.push(div);
    }
    return divCollection;
}

/********** SLEEP FUNCTION ************/
// Used to allow asynchronous visualizations of synchronous tasks
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

/******* SWAP FUNCTIONS *********/
// Used for testing the algorithm before animating the visualization
const syncSwap = (div1, div2) => {
    let tmp = div1.style.maxWidth;
    div1.style.maxWidth = div2.style.maxWidth;
    div2.style.maxWidth = tmp;
}

async function asyncSwap(div1, div2) {
    await sleep(50);
    let tmp = div1.style.maxWidth;
    div1.style.maxWidth = div2.style.maxWidth;
    div2.style.maxWidth = tmp;
}

const swapDivs = (smallerDiv, biggerDiv) => {
    return new Promise(resolve => {
        setTimeout(() => {
            let tmp = smallerDiv.style.maxWidth;
            smallerDiv.style.maxWidth = biggerDiv.style.maxWidth;
            biggerDiv.style.maxWidth = tmp;
            resolve();
        }, 50);
    });
}

/****************************************/
/*********** SORTING ALGO'S *************/
/****************************************/

/******* BUBBLE SORT ***********/
async function bubbleSort(divCollection) {
    displayBubbleSortInfo();
    const len = divCollection.length;
    for (let i = 0; i < len; i++) {
        for (let j = 0; j < len - i - 1; j++) {
            divCollection[j].style.backgroundColor = "#FF4949";
            divCollection[j + 1].style.backgroundColor = "#FF4949";
            let numDiv1 = parseInt(divCollection[j].style.maxWidth);
            let numDiv2 = parseInt(divCollection[j + 1].style.maxWidth);
            let div1 = divCollection[j];
            let div2 = divCollection[j + 1];
            if (numDiv1 > numDiv2) {
                await swapDivs(div2, div1);
            }
            divCollection[j].style.backgroundColor = "darkcyan";
            divCollection[j + 1].style.backgroundColor = "darkcyan";
        }
        divCollection[len - i - 1].style.backgroundColor = 'black';
    }
}

function displayBubbleSortInfo() {
    const infoDiv = document.querySelector('.algo-info');
    let html = `<h1>Bubble Sort Visualizer</h1>`;
    html += `<h2>Time Complexity: O(n^2)</h2>`;
    html += `<h3>Space Complexity: O(1)</h3>`;
    html += `<p>This sorting algorithm loops through the array and continues to push the
             largest found element into the last position, also pushing the last available
             position down by one on each iteration. It is guaranteed to run in exactly
             O(n^2) time because it is a nested loop that runs completely through.</p>`;
    infoDiv.innerHTML = html;
}

/****** QUICK SORT ********/
async function quickSort(divCollection, start, end) {
    if (start >= end) return;
    let partitionIndex = await partition(divCollection, start, end);
    await Promise.all([quickSort(divCollection, start, partitionIndex - 1),
                       quickSort(divCollection, partitionIndex + 1, end)]);
}

/* This function takes the last element as the pivot, places
   the pivot element at its correct position in the sorted
   array, and places all smaller elements (smaller than the pivot)
   to the left of the pivot and all greater elements to the right
   of the pivot. */
async function partition(divCollection, start, end) {
    let pivotIndex = start;
    let pivotValue = parseInt(divCollection[end].style.maxWidth);
    for (let i = start; i < end; i++) {
        if (parseInt(divCollection[i].style.maxWidth) < pivotValue) {
            await asyncSwap(divCollection[i], divCollection[pivotIndex]);
            pivotIndex++;
        }
    }
    await asyncSwap(divCollection[pivotIndex], divCollection[end]);
    return pivotIndex;
}

function displayQuickSortInfo() {
    const infoDiv = document.querySelector('.algo-info');
    let html = `<h1>Quick Sort Visualizer</h1>`;
    html += `<h2>Time Complexity: O(n log n)</h2>`;
    html += `<h3>Space Complexity: O(log n)</h3>`;
    html += `<p>This sorting algorithm uses the idea of a partition to sort
             each iteration recursively. You can implement quick sort
             in a variety of manners based on the method in which you
             pick your "pivot" value to partition the array. In this
             visualization, I implemented the method that chooses the
             last element of the array as the pivot value. You could
             also choose the first value, the middle value, or the median
             value based on the first, middle, and last values.</p>`;
    infoDiv.innerHTML = html;
}

/* Merge Sort does not sort in place, and thus we have to be
 * clever when implementing it and also editing the css style
 * of our divs to show the visualization of how the algorithm
 * works. My method is to store a copy of the divs, that way
 * I can use one to be sorted by merge sort, and the other to
 * change the css style property to show the visualization.
 * Unlike Quick Sort and Bubble Sort, we are not swapping
 * elements when sorting, instead we are merging entire
 * arrays together as the name implies. */
function mergeSort(divCollection) {
    if (divCollection.length < 2) return divCollection;
    let middleIndex = Math.floor(divCollection.length / 2);
    let left = divCollection.slice(0, middleIndex);
    let right = divCollection.slice(middleIndex);
    return merge(mergeSort(left), mergeSort(right));
}

function merge(left, right) {
    let mergedCollection = [];
    while (left.length && right.length) {
        if (parseInt(left[0].style.fontSize) < parseInt(right[0].style.fontSize)) {
            let el = left.shift();
            mergedCollection.push(el);
        } else {
            let el = right.shift();
            mergedCollection.push(el);
        }
    }
    let res = mergedCollection.concat(left.slice().concat(right.slice()));
    return res;
}
/***** INITIALIZATION FUNCTION *******/
generateSortingDivs(10);
let divs = storeSortingDivs();
let copyDivs = [...divs];
console.log('Original State: ')
console.log(divs);
//bubbleSort(divs);
//displayQuickSortInfo();
//quickSort(divs, 0, divs.length-1);
let x = mergeSort(copyDivs);
console.log('Sorted: ');
console.log(x);
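For what it's worth, one way around the DOM update problem described above (a sketch, not part of the original code): run merge sort over plain numeric widths read from the divs once, record every slot assignment as an animation step, and only touch the DOM when replaying those steps. mergeSortWidths and applySteps are hypothetical helpers reusing the question's sleep function:

// Merge sort the values only, recording which width each slot should
// show after every merge step; the DOM is never touched while sorting.
function mergeSortWidths(values, animations, start = 0) {
    if (values.length < 2) return values;
    const mid = Math.floor(values.length / 2);
    const left = mergeSortWidths(values.slice(0, mid), animations, start);
    const right = mergeSortWidths(values.slice(mid), animations, start + mid);
    const merged = [];
    while (left.length && right.length) {
        merged.push(left[0] <= right[0] ? left.shift() : right.shift());
    }
    const result = merged.concat(left, right);
    // Record one step per slot: position in the full array plus its new width.
    result.forEach((v, i) => animations.push({ index: start + i, width: v }));
    return result;
}

async function applySteps(divs, animations) {
    for (const step of animations) {
        divs[step.index].style.maxWidth = step.width + '%';
        await sleep(50); // reuse the sleep helper from the question
    }
}

// Usage sketch:
// const anims = [];
// mergeSortWidths(divs.map(d => parseInt(d.style.maxWidth)), anims);
// applySteps(divs, anims);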

How V8 optimise code using hidden classes and inline caching

Recently I came across the concept of hidden classes and inline caching used by V8 to optimise js code. Cool.
I understand that objects are represented as hidden classes internally, and that two objects may have the same properties but different hidden classes (depending upon the order in which the properties are assigned).
I also understand that V8 uses inline caching to reuse a directly cached property offset instead of looking the offset up through the object's hidden class on every access.
Code -
function Point(x, y) {
    this.x = x;
    this.y = y;
}
function processPoint(point) {
    // console.log(point.x, point.y, point.a, point.b);
    // let x = point;
}
function main() {
    let p1 = new Point(1, 1);
    let p2 = new Point(1, 1);
    let p3 = new Point(1, 1);
    const N = 300000000;
    p1.a = 1;
    p1.b = 1;
    p2.b = 1;
    p2.a = 1;
    p3.a = 1;
    p3.b = 1;
    let start_1 = new Date();
    for (let i = 0; i < N; i++) {
        if (i % 4 != 0) {
            processPoint(p1);
        } else {
            processPoint(p2);
        }
    }
    let end_1 = new Date();
    let t1 = (end_1 - start_1);
    let start_2 = new Date();
    for (let i = 0; i < N; i++) {
        if (i % 4 != 0) {
            processPoint(p1);
        } else {
            processPoint(p1);
        }
    }
    let end_2 = new Date();
    let t2 = (end_2 - start_2);
    let start_3 = new Date();
    for (let i = 0; i < N; i++) {
        if (i % 4 != 0) {
            processPoint(p1);
        } else {
            processPoint(p3);
        }
    }
    let end_3 = new Date();
    let t3 = (end_3 - start_3);
    console.log(t1, t2, t3);
}
(function(){
    main();
})();
I was expecting results like t1 > (t2 = t3) because:
First loop: V8 will try to optimise after a couple of runs, but it will soon encounter a different hidden class, so it will deoptimise.
Second loop: the same object is passed every time, so inline caching can be used.
Third loop: same as the second loop, because the hidden classes are the same.
But the results are not as expected. I got (and similar results running again and again):
3553 4805 4556
Questions :
Why were the results not as expected? Where did my assumptions go wrong?
How can I change this code to demonstrate hidden classes and inline caching performance improvements?
Did I get it all wrong from the start?
Are hidden classes present just for memory efficiency by letting objects share them?
Any other sites with some simple examples of performance improvements?
I am using node 8.9.4 for testing. Thanks in advance.
Sources :
https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
https://draft.li/blog/2016/12/22/javascript-engines-hidden-classes/
https://richardartoul.github.io/jekyll/update/2015/04/26/hidden-classes.html
and many more..
V8 developer here. The summary is: Microbenchmarking is hard, don't do it.
First off, with your code as posted, I'm seeing 380 380 380 as the output, which is expected, because function processPoint is empty, so all loops do the same work (i.e., no work) no matter which point object you select.
Measuring the performance difference between monomorphic and 2-way polymorphic inline caches is difficult, because it is not large, so you have to be very careful about what else your benchmark is doing. console.log, for example, is so slow that it'll shadow everything else.
You'll also have to be careful about the effects of inlining. When your benchmark has many iterations, the code will get optimized (after running waaaay more than twice), and the optimizing compiler will (to some extent) inline functions, which can allow subsequent optimizations (specifically: eliminating various things) and thereby can significantly change what you're measuring. Writing meaningful microbenchmarks is hard; you won't get around inspecting generated assembly and/or knowing quite a bit about the implementation details of the JavaScript engine you're investigating.
Another thing to keep in mind is where inline caches are, and what state they'll have over time. Disregarding inlining, a function like processPoint doesn't know or care where it's called from. Once its inline caches are polymorphic, they'll remain polymorphic, even if later on in your benchmark (in this case, in the second and third loop) the types stabilize.
Yet another thing to keep in mind when trying to isolate effects is that long-running functions will get compiled in the background while they run, and will then at some point be replaced on the stack ("OSR"), which adds all sorts of noise to your measurements. When you invoke them with different loop lengths for warmup, they'll still get compiled in the background however, and there's no way to reliably wait for that background job. You could resort to command-line flags intended for development, but then you wouldn't be measuring regular behavior any more.
Anyhow, the following is an attempt to craft a test similar to yours that produces plausible results (about 100 180 280 on my machine):
function Point() {}
// These three functions are identical, but they will be called with different
// inputs and hence collect different type feedback:
function processPointMonomorphic(N, point) {
    let sum = 0;
    for (let i = 0; i < N; i++) {
        sum += point.a;
    }
    return sum;
}
function processPointPolymorphic(N, point) {
    let sum = 0;
    for (let i = 0; i < N; i++) {
        sum += point.a;
    }
    return sum;
}
function processPointGeneric(N, point) {
    let sum = 0;
    for (let i = 0; i < N; i++) {
        sum += point.a;
    }
    return sum;
}
let p1 = new Point();
let p2 = new Point();
let p3 = new Point();
let p4 = new Point();
const warmup = 12000;
const N = 100000000;
let sum = 0;
p1.a = 1;
p2.b = 1;
p2.a = 1;
p3.c = 1;
p3.b = 1;
p3.a = 1;
p4.d = 1;
p4.c = 1;
p4.b = 1;
p4.a = 1;
processPointMonomorphic(warmup, p1);
processPointMonomorphic(1, p1);
let start_1 = Date.now();
sum += processPointMonomorphic(N, p1);
let t1 = Date.now() - start_1;
processPointPolymorphic(2, p1);
processPointPolymorphic(2, p2);
processPointPolymorphic(2, p3);
processPointPolymorphic(warmup, p4);
processPointPolymorphic(1, p4);
let start_2 = Date.now();
sum += processPointPolymorphic(N, p1);
let t2 = Date.now() - start_2;
processPointGeneric(warmup, 1);
processPointGeneric(1, 1);
let start_3 = Date.now();
sum += processPointGeneric(N, p1);
let t3 = Date.now() - start_3;
console.log(t1, t2, t3);

Restore Binary Tree with PreOrder and InOrder - Javascript

Could somebody teach me how to restore a binary tree using preorder and inorder arrays? I've seen some examples (none in JavaScript) and they kind of make sense, but the recursive call never returns a full tree when I try to write it. Would love to see explanations as well. Here's some code to start off:
Creating a tree node uses this:
function Tree(x) {
    this.value = x;
    this.left = null;
    this.right = null;
}
Creating the tree uses this:
function restoreBinaryTree(inorder, preorder) {
}
Some sample input:
inorder = [4,2,1,5,3]
preorder = [1,2,4,3,5,6]
inorder = [4,11,8,7,9,2,1,5,3,6]
preorder = [1,2,4,11,7,8,9,3,5,6]
EDIT: I had been working on this for days and was unable to come up with a solution of my own, so I searched some out (most were written in Java). I tried to mimic this solution but to no avail.
This is a solution in C++ which I think you could translate without problem:
/* keys are between l_p and r_p in the preorder array
   keys are between l_i and r_i in the inorder array */
Node * build_tree(int preorder[], long l_p, long r_p,
                  int inorder[], long l_i, long r_i)
{
    if (l_p > r_p)
        return nullptr; // array sections are empty
    Node * root = new Node(preorder[l_p]); // root is the first key in preorder
    if (r_p == l_p)
        return root; // the array section holds a single node
    // search the inorder array for the position of the root
    int i = 0;
    for (int j = l_i; j <= r_i; ++j)
        if (inorder[j] == preorder[l_p])
        {
            i = j - l_i;
            break;
        }
    root->left = build_tree(preorder, l_p + 1, l_p + i,
                            inorder, l_i, l_i + (i - 1));
    root->right = build_tree(preorder, l_p + i + 1, r_p,
                             inorder, l_i + i + 1, r_i);
    return root;
}
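A direct JavaScript translation of the C++ above (a sketch using the Tree constructor from the question):

function restoreBinaryTree(inorder, preorder) {
    function build(lp, rp, li, ri) {
        if (lp > rp) return null; // array sections are empty
        var root = new Tree(preorder[lp]); // root is the first key in preorder
        if (rp === lp) return root; // the section holds a single node
        // find the root's position within the inorder section
        var i = 0;
        for (var j = li; j <= ri; j++) {
            if (inorder[j] === preorder[lp]) { i = j - li; break; }
        }
        root.left = build(lp + 1, lp + i, li, li + i - 1);
        root.right = build(lp + i + 1, rp, li + i + 1, ri);
        return root;
    }
    return build(0, preorder.length - 1, 0, inorder.length - 1);
}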

Get the intersection of n arrays

Using ES6's Set, given two arrays we can get the intersection like so:
let a = new Set([1,2,3])
let b = new Set([1,2,4])
let intersect = new Set([...a].filter(i => b.has(i)));
How can we get the intersection of n arrays?
Update:
I'm trying to wrap my head around this for the following use case. I have a two dimensional array with at least one element.
parts.forEach(part => {
    intersection = new Set()
})
How would you get the intersection of each element (array) in parts?
Assuming you have some function intersect(set1, set2) {...} that can intersect two sets, you can get the intersection of an array of sets using reduce:
function intersect(a, b) {
    return new Set([...a].filter(i => b.has(i)));
}
var sets = [new Set([1,2,3]), ...];
var intersection = sets.reduce(intersect);
You can create an intersect helper function using a combination of Array methods like .filter(), .map(), and .every().
This answer is inspired by the comment above from Xufox, who mentioned using Array#every in a filter predicate.
function intersect (first = [], ...rest) {
    rest = rest.map(array => new Set(array))
    return first.filter(e => rest.every(set => set.has(e)))
}
let parts = [
    [1, 2, 3],
    [1, 2, 4],
    [1, 5, 2]
]
console.log(
    intersect(...parts)
)
ES6 still has a while
This is the type of function that can easily cause long lags due to excessive amounts of processing. This is all the more true with the unquestioning, even preferential, use of ES6 array methods like reduce and filter over plain old loops like while and for.
When calculating the intersection of many sets, the amount of work done per iteration should go down once an item has been found not to be part of the intersection. Because forEach cannot break, you are forced to iterate all elements regardless. Adding some code to skip the search once the current item is known not to belong can improve the performance, but it is a real kludge.
There is also the tendency to create whole new datasets just to remove a single item from an array, set, or map. This is a very bad habit that I see more and more of as people adopt this style.
Get the intersection of n sets.
So to the problem at hand. Find the intersection of many sets.
Solution B
A typical ES6 solution
function intersectB(firstSet, ...sets) {
    // function to intersect two sets
    var intersect = (a, b) => {
        return new Set([...a].filter(item => b.has(item)))
    };
    // iterate all sets, comparing the first set to each
    sets.forEach(sItem => firstSet = intersect(firstSet, sItem));
    // return the result
    return firstSet;
}
var sets = [new Set([1,2,3,4]), new Set([1,2,4,6,8]), new Set([1,3,4,6,8])];
var inter = intersectB(...sets);
console.log([...inter]);
Works well, and for the simple test case execution time is under a millisecond. But in my book it is a memory-hogging knot of inefficiency, creating arrays and sets on almost every line and iterating whole sets when the outcome is already known.
Let's give it some more work: 100 sets with up to 10000 items each, over 10 tests with differing amounts of matching items. Most of the intersections will return empty sets.
Warning: this will cause the page to hang for up to one whole second... :(
// Create a set of numbers from 0 up to (but not including) count,
// where each number is included with probability odds.
// Return it as a new Set.
function createLargeSet(count, odds) {
    var numbers = new Set();
    while (count-- > 0) {
        if (Math.random() < odds) {
            numbers.add(count);
        }
    }
    return numbers;
}
// create an array of large sets
function bigArrayOfSets(setCount, setMaxSize, odds) {
    var bigSets = [];
    for (var i = 0; i < setCount; i++) {
        bigSets.push(createLargeSet(setMaxSize, odds));
    }
    return bigSets;
}
function intersectB(firstSet, ...sets) {
    var intersect = (a, b) => {
        return new Set([...a].filter(item => b.has(item)))
    };
    sets.forEach(sItem => firstSet = intersect(firstSet, sItem));
    return firstSet;
}
var testSets = [];
for (var i = 0.1; i <= 1; i += 0.1) {
    testSets.push(bigArrayOfSets(100, 10000, i));
}
var now = performance.now();
testSets.forEach(testDat => intersectB(...testDat));
var time = performance.now() - now;
console.log("Execution time : " + time);
Solution A
A better way, not as fancy but much more efficient.
function intersectA(firstSet, ...sets) {
    var count = sets.length;
    var result = new Set(firstSet); // Only create one copy of the set
    firstSet.forEach(item => {
        var i = count;
        var allHave = true;
        while (i--) {
            allHave = sets[i].has(item);
            if (!allHave) { break } // loop only until an item fails the test
        }
        if (!allHave) {
            result.delete(item); // remove the item from the set rather than
                                 // creating a whole new set
        }
    })
    return result;
}
Compare
So now let's compare both. If you are feeling lucky, try to guess the performance difference; it's a good way to gauge your understanding of Javascript execution.
// Create a set of numbers from 0 up to (but not including) count,
// where each number is included with probability odds.
// Return it as a new Set.
function createLargeSet(count, odds) {
    var numbers = new Set();
    while (count-- > 0) {
        if (Math.random() < odds) {
            numbers.add(count);
        }
    }
    return numbers;
}
// create an array of large sets
function bigArrayOfSets(setCount, setMaxSize, odds) {
    var bigSets = [];
    for (var i = 0; i < setCount; i++) {
        bigSets.push(createLargeSet(setMaxSize, odds));
    }
    return bigSets;
}
function intersectA(firstSet, ...sets) {
    var count = sets.length;
    var result = new Set(firstSet); // Only create one copy of the set
    firstSet.forEach(item => {
        var i = count;
        var allHave = true;
        while (i--) {
            allHave = sets[i].has(item);
            if (!allHave) { break } // loop only until an item fails the test
        }
        if (!allHave) {
            result.delete(item); // remove the item from the set rather than
                                 // creating a whole new set
        }
    })
    return result;
}
function intersectB(firstSet, ...sets) {
    var intersect = (a, b) => {
        return new Set([...a].filter(item => b.has(item)))
    };
    sets.forEach(sItem => firstSet = intersect(firstSet, sItem));
    return firstSet;
}
var testSets = [];
for(var i = 0.1; i <= 1; i += 0.1){
testSets.push(bigArrayOfSets(100,10000,i));
}
var now = performance.now();
testSets.forEach(testDat => intersectB(...testDat));
var time = performance.now() - now;
console.log("Execution time 'intersectB' : " + time);
var now = performance.now();
testSets.forEach(testDat => intersectA(...testDat));
var time = performance.now() - now;
console.log("Execution time 'intersectA' : " + time);
As you can see, using a simple while loop may not be as cool as using filter, but the performance benefit is huge, and that is something to keep in mind next time you write that perfect 3-line ES6 array manipulation function. Don't forget about for and while.
The most efficient algorithm for intersecting n arrays is the one implemented in fast_array_intersect. It runs in O(n), where n is the total number of elements in all the arrays.
The base principle is simple: iterate over all the arrays, storing the number of times you see each element in a map. Then filter the smallest array, to return only the elements that have been seen in all the arrays. (source code).
You can use the library with a simple import:
import intersect from 'fast_array_intersect'
intersect([[1,2,3], [1,2,6]]) // --> [1,2]
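For reference, a dependency-free sketch of the counting idea described above (not the library's actual source):

function intersectByCounting(arrays) {
    // Count, for each value, how many of the arrays contain it
    // (deduplicating each array through a Set first).
    const counts = new Map();
    for (const arr of arrays) {
        for (const v of new Set(arr)) {
            counts.set(v, (counts.get(v) || 0) + 1);
        }
    }
    // Keep only values seen in every array; filtering the smallest
    // array keeps the output order stable and the work minimal.
    const smallest = arrays.reduce((a, b) => (a.length <= b.length ? a : b));
    return [...new Set(smallest)].filter(v => counts.get(v) === arrays.length);
}

console.log(intersectByCounting([[1, 2, 3], [1, 2, 6]])); // [1, 2]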
OK, I guess the most efficient way of performing the array intersection is by utilizing a Map or hash object. Here I test 1000 arrays, each with ~1000 random integer items in the range 1..175, for an intersection. The result is obtained in less than 100 ms.
function setIntersection(a) {
    var m = new Map(),
        r = new Set(),
        l = a.length;
    a.forEach(sa => new Set(sa).forEach(n => m.has(n) ? m.set(n, m.get(n) + 1)
                                               : m.set(n, 1)));
    m.forEach((v, k) => v === l && r.add(k));
    return r;
}
var testSets = Array(1000).fill().map(_ => Array(1000).fill().map(_ => ~~(Math.random()*175+1)));
console.time("int");
result = setIntersection(testSets);
console.timeEnd("int");
console.log(JSON.stringify([...result]));
