Could somebody teach me how to restore a binary tree using Preorder and Inorder arrays? I've seen some examples (none in JavaScript) and they kind of make sense, but the recursive call never returns a full tree when I try to write it myself. I would love to see explanations as well. Here's some code to start off:
Creating a tree node uses this:
function Tree(x) {
  this.value = x;
  this.left = null;
  this.right = null;
}
Creating the tree uses this:
function restoreBinaryTree(inorder, preorder) {
}
Some sample input:
inorder = [4,2,1,5,3]
preorder = [1,2,4,3,5,6]
inorder = [4,11,8,7,9,2,1,5,3,6]
preorder = [1,2,4,11,7,8,9,3,5,6]
EDIT: I had been working on this for days and was unable to come up with a solution of my own, so I searched some out (most were written in Java). I tried to mimic this solution, but to no avail.
This is a solution in C++ which I think you could translate without problem:
/* Keys are between l_p and r_p in the preorder array.
   Keys are between l_i and r_i in the inorder array. */
Node * build_tree(int preorder[], long l_p, long r_p,
                  int inorder[], long l_i, long r_i)
{
  if (l_p > r_p)
    return nullptr; // array sections are empty
  Node * root = new Node(preorder[l_p]); // root is the first key in preorder
  if (r_p == l_p)
    return root; // the array section has only one node
  // Search the inorder array for the position of the root
  int i = 0;
  for (int j = l_i; j <= r_i; ++j)
    if (inorder[j] == preorder[l_p])
    {
      i = j - l_i;
      break;
    }
  root->left = build_tree(preorder, l_p + 1, l_p + i,
                          inorder, l_i, l_i + (i - 1));
  root->right = build_tree(preorder, l_p + i + 1, r_p,
                           inorder, l_i + i + 1, r_i);
  return root;
}
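Since the question asks for JavaScript, here is a direct translation of the same recursion using the question's Tree constructor (the nested `build` helper and its index parameters mirror the C++ version; they are not from the original post):

```javascript
// Direct translation of the C++ build_tree above, using the question's Tree constructor.
function Tree(x) {
  this.value = x;
  this.left = null;
  this.right = null;
}

function restoreBinaryTree(inorder, preorder) {
  function build(lp, rp, li, ri) {
    if (lp > rp) return null;          // section is empty
    var root = new Tree(preorder[lp]); // root is the first key in preorder
    if (lp === rp) return root;        // single-node section
    // Find the root's offset i inside the inorder section
    var i = 0;
    for (var j = li; j <= ri; j++) {
      if (inorder[j] === preorder[lp]) { i = j - li; break; }
    }
    root.left = build(lp + 1, lp + i, li, li + i - 1);
    root.right = build(lp + i + 1, rp, li + i + 1, ri);
    return root;
  }
  return build(0, preorder.length - 1, 0, inorder.length - 1);
}
```

The linear search inside each call makes this O(n²) in the worst case; building a value-to-index map over `inorder` first would bring it down to O(n).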
I am trying to create a simple feed-forward neural network in JavaScript using a tutorial found here. I believe that I followed the tutorial correctly, as when I trained it with an input matrix of [[0,0,1],[0,1,1],[1,0,1],[1,1,1]] and a solution matrix of [[0],[1],[1],[0]], the network performed as expected. However, when I tried to train the network using the MNIST handwritten digit database and passed in larger matrices, all of the elements in the output array approached zero. I suspect that this has to do with the dot product of the input array and the weights returning an array filled with large numbers, but my attempts to scale down these numbers have caused the outputs to approach 1. Would anybody be able to figure out what is going wrong?

My neural network has only one hidden layer with 300 neurons. The code snippet below shows the methods for my neural network, because I believe that that is where I am going wrong, but if you want to see my entire messy, undocumented program, it can be found here. I am unfamiliar with the math library that I am using, so I made some of my own methods to go along with the math methods:
multMatrices(a, b) returns the product of matrices a and b.
math.multiply(a, b) returns the dot product of the two matrices.
math.add(a, b) and math.subtract(a, b) perform matrix addition and subtraction, and
transposeMatrix(a) returns the transpose of matrix a.
setAll(a, b) performs an operation on every element of matrix a, be it plugging the element into the sigmoid function (1 / (1 + e^-a)) in the case of "sigmoid", the sigmoid derivative function (a * (1 - a)) in the case of "sigmoidDerivitive", or setting it equal to a random value between 0 and 0.05 in the case of "randomlow".
I found that setting weights to a value between 0 and 1 kept the loss at 0.9, so I set them now using "randomlow".
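For reference, a minimal sketch of what "sigmoid" and "sigmoidDerivitive" compute element-wise, assuming plain nested arrays rather than math.js matrices (`mapMatrix` is a hypothetical helper standing in for setAll):

```javascript
// Element-wise sigmoid on a plain 2-D array; a stand-in for setAll(m, "sigmoid").
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }
// Takes the sigmoid *output* s, i.e. d/dx sigmoid(x) = s * (1 - s).
function sigmoidDerivative(s) { return s * (1 - s); }
function mapMatrix(m, fn) { return m.map(row => row.map(fn)); }

var activated = mapMatrix([[0, 2], [-2, 5]], sigmoid); // every value squashed into (0, 1)
```

Note that the derivative here expects the already-activated value, which matches how the question's backpropagation code applies "sigmoidDerivitive" to layer outputs.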
function NeuralNetwork(x, y){
  //Initializing the neural network
  this.input = x;
  this.y = y;
  this.sizes = [this.input._size[1], 300, this.y._size[1]];
  this.layers = this.sizes.length - 1;
  this.lyrs = [this.input];
  this.weights = [];
  this.dweights = [];
  for(var i = 0; i < this.layers; i++){
    this.weights.push(new math.matrix());
    this.weights[i].resize([this.sizes[i], this.sizes[i + 1]]);
    this.weights[i] = setAll(this.weights[i], "randomlow");
  }
  this.output = new math.matrix();
  this.output.resize(this.y._size);
};
NeuralNetwork.prototype.set = function(x, y){
  //I train the network by looping through values from the database and passing them into this function
  this.input = x;
  this.lyrs = [this.input];
  this.y = y;
};

NeuralNetwork.prototype.feedforward = function(){
  //Looping through the layers and multiplying them by their respective weights
  for(var i = 0; i < this.weights.length; i++){
    this.lyrs[i + 1] = math.multiply(this.lyrs[i], this.weights[i]);
    this.lyrs[i + 1] = setAll(this.lyrs[i + 1], "sigmoid");
  }
  this.output = this.lyrs[this.lyrs.length - 1];
};
NeuralNetwork.prototype.backpropogate = function(){
  //Backpropagating the network. I don't fully understand this part
  this.antis = [
    function(a, b, c){
      return(
        math.multiply(transposeMatrix(a[0]), multMatrices(math.multiply(multMatrices(math.multiply(math.subtract(b.y, b.output), 2), setAll(b.output, "sigmoidDerivitive")), transposeMatrix(c)), setAll(a[1], "sigmoidDerivitive")))
      );
    },
    function(a, b, c){
      return(
        math.multiply(transposeMatrix(a[0]), multMatrices(math.multiply(math.subtract(b.y, b.output), 2), setAll(b.output, "sigmoidDerivitive")))
      );
    }];
  this.input = [];
  this.weightInput = 0;
  for(var i = this.weights.length - 1; i >= 0; --i){
    this.input.unshift(this.lyrs[i]);
    this.weightInput = (i === this.weights.length - 1 ? 0 : this.weights[i + 1]);
    this.dweights[i] = this.antis[i](this.input, this, this.weightInput);
  }
  for(var i = 0; i < this.dweights.length; i++){
    this.weights[i] = math.add(this.weights[i], this.dweights[i]);
  }
};
As always, I appreciate any time spent trying to solve my problem. If my code is unintelligible, don't bother with it. JavaScript probably isn't the best language for this purpose, but I didn't want to follow a tutorial in the same language.
EDIT: This is a potential duplicate of this post, which was answered. If anybody is facing this problem, they should see if the answer over there helps. As of now I have not tested it with my program.
I am trying to solve the Hackerrank problem Jesse and Cookies:
Jesse loves cookies and wants the sweetness of some cookies to be greater than value k. To do this, the two cookies with the least sweetness are repeatedly mixed. This creates a special combined cookie with:
sweetness = (1 × least sweet cookie + 2 × 2nd least sweet cookie).
This continues until all the cookies have a sweetness ≥ k.
Given the sweetness of a number of cookies, determine the minimum number of operations required. If it is not possible, return −1.
Example
k = 9
A = [2,7,3,6,4,6]
The smallest values are 2, 3.
Remove them then return 2 + 2 × 3 = 8 to the array. Now A = [8,7,6,4,6].
Remove 4, 6 and return 4 + 2 × 6 = 16 to the array. Now A = [16,8,7,6].
Remove 6, 7, return 6 + 2 × 7 = 20 and A = [20,16,8,7].
Finally, remove 8, 7 and return 7 + 2 × 8 = 23 to A. Now A = [23,20,16].
All values are ≥ k = 9, so the process stops after 4 iterations. Return 4.
I couldn't find a JavaScript solution or a hint for this problem. My code seems to be working, except that it times out for a large array (input size > 1 million).
Is there a way to make my code more efficient? I think the time complexity is between linear and O(n log n).
My Code:
function cookies(k, A) {
  A.sort((a, b) => a - b);
  let ops = 0;
  while (A[0] < k && A.length > 1) {
    ops++;
    let calc = (A[0] * 1) + (A[1] * 2);
    A.splice(0, 2);
    let inserted = false;
    if (A.length === 0) { // when the array is empty after splice
      A.push(calc);
    } else {
      for (var i = 0; i < A.length && !inserted; i++) {
        if (A[A.length - 1] < calc) {
          A.push(calc);
          inserted = true;
        } else if (A[i] >= calc) {
          A.splice(i, 0, calc);
          inserted = true;
        }
      }
    }
  }
  if (A[0] < k) {
    ops = -1;
  }
  return ops;
}
This is indeed a problem that can be solved efficiently with a heap. As JavaScript has no native heap, you can implement your own.
You should also cope with huge inputs where most values are already greater than k. Those should not have to be part of the heap -- they would just make heap operations unnecessarily slower. Also, when cookies are combined, they only need to go back into the heap when they are not yet good enough.
Special care needs to be taken when the heap ends up with just one value (less than k). In that case, check whether any good cookies were created (and thus did not end up in the heap). If so, then one more operation yields the solution. If not, there is no solution and -1 should be returned.
Here is an implementation in JavaScript:
/* MinHeap implementation without payload. */
const MinHeap = {
  /* siftDown:
   * The node at the given index of the given heap is sifted down in its subtree
   * until it does not have a child with a lesser value.
   */
  siftDown(arr, i = 0, value = arr[i]) {
    if (i >= arr.length) return;
    while (true) {
      // Choose the child with the least value
      let j = i * 2 + 1;
      if (j + 1 < arr.length && arr[j] > arr[j + 1]) j++;
      // If no child has a lesser value, then we've found the spot!
      if (j >= arr.length || value <= arr[j]) break;
      // Move the selected child value one level up...
      arr[i] = arr[j];
      // ...and consider the child slot for putting our sifted value
      i = j;
    }
    arr[i] = value; // Place the sifted value at the found spot
  },
  /* heapify:
   * The given array of values is reordered in-place so that it becomes a valid heap.
   * It also returns the heap.
   */
  heapify(arr) {
    // Establish heap with an incremental, bottom-up process
    for (let i = arr.length >> 1; i--; ) this.siftDown(arr, i);
    return arr;
  },
  /* pop:
   * Extracts the root value of the given heap, and returns it.
   * Returns undefined if the heap is empty.
   */
  pop(arr) {
    // Pop the last leaf from the given heap, and exchange it with its root
    return this.exchange(arr, arr.pop());
  },
  /* exchange:
   * Replaces the root node of the given heap with the given node, and returns the previous root.
   * Returns the given node if the heap is empty.
   * This is similar to a call of pop and push, but is more efficient.
   */
  exchange(arr, value) {
    if (!arr.length) return value;
    // Get the root node, so as to return it later
    let oldValue = arr[0];
    // Inject the replacing node using the sift-down process
    this.siftDown(arr, 0, value);
    return oldValue;
  },
  /* push:
   * Inserts the given node into the given heap. It returns the heap.
   */
  push(arr, value) {
    // First assume the insertion spot is at the very end (as a leaf)
    let i = arr.length;
    let j;
    // Then follow the path to the root, moving values down for as long as they
    // are greater than the value to be inserted
    while ((j = (i - 1) >> 1) >= 0 && value < arr[j]) {
      arr[i] = arr[j];
      i = j;
    }
    // Found the insertion spot
    arr[i] = value;
    return arr;
  }
};
function cookies(k, arr) {
  // Remove values that are already OK, to keep the heap size minimal
  const heap = arr.filter(val => val < k);
  let greaterPresent = heap.length < arr.length; // Mark whether there is a good cookie
  MinHeap.heapify(heap);
  let result = 0;
  while (heap.length > 1) {
    const newValue = MinHeap.pop(heap) + MinHeap.pop(heap) * 2;
    // Only push the result back onto the heap if it is still not great enough
    if (newValue < k) MinHeap.push(heap, newValue);
    else greaterPresent = true; // Otherwise just mark that we have a good cookie
    result++;
  }
  // If no good cookies were created, return -1.
  // Otherwise, if there is still 1 element in the heap, add 1.
  return greaterPresent ? result + heap.length : -1;
}
// Example run
console.log(cookies(9, [2,7,3,6,4,6])); // 4
I solved it using Java; you may adapt it to JavaScript.
This code does not require a heap. It just works in-place on the array that is passed in. It passed all tests for me.
static int cookies(int k, int[] arr) {
    Arrays.sort(arr);
    int i = 0,
        c = arr.length,
        i0 = 0,
        c0 = 0,
        op = 0;
    while( (arr[i]<k || arr[i0]<k) && (c0-i0 + c-i)>1 ) {
        int s1 = i0==c0 || arr[i]<=arr[i0] ? arr[i++] : arr[i0++],
            s2 = i0==c0 || (i!=c && arr[i]<=arr[i0]) ? arr[i++] : arr[i0++];
        arr[c0++] = s1 + 2*s2;
        op++;
        if( i==c ) {
            i = i0;
            c = c0;
            c0 = i0;
        }
    }
    return c-i>1 || arr[i]>=k ? op : -1;
}
First of all, sort the array.
Newly calculated values are stored in the arr[i0..c0] range; this region does not need sorting, because the combined values are produced in increasing order.
When the arr[i..c] range is exhausted (i == c), forget it and work on arr[i0..c0].
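A JavaScript adaptation of the same idea, sketched with a second array as the queue of combined cookies instead of reusing the input array (the names are mine, not from the Java code). Since both the sorted input and the combined-cookie queue are non-decreasing, the overall minimum is always at one of the two fronts:

```javascript
function cookies(k, arr) {
  arr.sort((a, b) => a - b);
  const combined = []; // newly mixed cookies, produced in non-decreasing order
  let i = 0, j = 0, ops = 0;

  // Remove and return the smaller of the two queue fronts
  function shiftMin() {
    if (i < arr.length && (j >= combined.length || arr[i] <= combined[j])) return arr[i++];
    return combined[j++];
  }

  while ((arr.length - i) + (combined.length - j) > 1) {
    const least = (i < arr.length)
      ? (j < combined.length ? Math.min(arr[i], combined[j]) : arr[i])
      : combined[j];
    if (least >= k) return ops; // everything remaining is already sweet enough
    combined.push(shiftMin() + 2 * shiftMin()); // least + 2 * second-least
    ops++;
  }
  const last = i < arr.length ? arr[i] : combined[j];
  return last >= k ? ops : -1;
}

console.log(cookies(9, [2, 7, 3, 6, 4, 6])); // 4
```

Sorting dominates at O(n log n); each of the up to n−1 mixes is then O(1), versus the O(n) splice per mix in the question's code.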
I've spent the last 2 days watching YouTube videos on neural networks.
In particular, I've been trying to implement a genetic algorithm that will evolve over time; however, most videos seem to be focused on neural networks that are trained and then used for classification.
Being confused, I decided to simply try to implement the basic structure of the network, and have coded this -- in JS, for convenience.
function sigmoid (x) { return 1 / (1 + Math.E ** -x); }
function Brain(inputs, hiddens, outputs) {
  this.weights = {
    hidden: [],
    output: []
  };
  for (var i = hiddens; i--;) {
    this.weights.hidden[i] = [];
    for (var w = inputs; w--;) this.weights.hidden[i].push(Math.random());
  }
  for (var i = outputs; i--;) {
    this.weights.output[i] = [];
    for (var w = hiddens; w--;) this.weights.output[i].push(Math.random());
  }
}
Brain.prototype.compute = function(inputs) {
  var hiddenInputs = [];
  for (var i = this.weights.hidden.length; i--;) {
    var dot = 0;
    for (var w = inputs.length; w--;) dot += inputs[w] * this.weights.hidden[i][w];
    hiddenInputs[i] = sigmoid(dot);
  }
  var outputs = [];
  for (var i = this.weights.output.length; i--;) {
    var dot = 0;
    for (var w = this.weights.hidden.length; w--;) dot += hiddenInputs[w] * this.weights.output[i][w];
    outputs[i] = sigmoid(dot);
  }
  return outputs;
}
var brain = new Brain(1,2,1);
brain.compute([1]);
I successfully get values between 0 and 1, and when I use specific weights, I get the same value each time for a constant input.
Is the terminology I'm using in the code good?
I fear I may simply be observing false positives, and am not actually feeding forward.
Is the sigmoid function appropriate? Should I be using it for a genetic / evolving algorithm?
I noticed that I'm getting results only between 0.5 and 1.
To combine a neural network with a genetic algorithm, your best shot is probably NEAT. There is a very good implementation of this algorithm in JS called 'Neataptic'; you should be able to find it on GitHub.
When combining a GA with an ANN, you generally want to adjust not only the weights, but the structure as well.
Sigmoid activation is OK for a GA, but in many cases you also want other activation functions; you can find a small list of activation functions on Wikipedia or create your own.
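As an illustration, a few common alternatives to the sigmoid (just a sample; which one fits depends on the network design):

```javascript
// sigmoid squashes to (0, 1); tanh is zero-centered in (-1, 1);
// ReLU is cheap and unbounded above, which avoids saturation for positive inputs.
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }
function tanh(x)    { return Math.tanh(x); }
function relu(x)    { return Math.max(0, x); }

console.log(sigmoid(0), tanh(0), relu(-3)); // 0.5 0 0
```

A zero-centered activation like tanh would also address the observation above that all outputs fall between 0.5 and 1 when inputs and weights are all positive.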
I am trying to learn graphs well and implemented the following depth-first search in JavaScript. The DFS function is working OK, but the checkRoute function is the source of my troubles. It accepts two inputs and returns true if there is a possible path between two nodes/vertices, and false if not. It does this by starting at a node, checking its adjacency list, and then checking the adjacency lists of every item in the adjacency list via recursion.
My solution works for only one case: when you check two vertices once. Because I'm storing the possibleToVisit array globally, it doesn't get cleared out each time. How could I push and store to the possibleToVisit array inside checkRoute instead of globally? Would it be better to have this array stored on the constructor?
var possibleToVisit = [];

function dfs(v) {
  this.marked[v] = true;
  if (this.adj[v] !== undefined) {
    console.log("visited vertex " + v);
  }
  for (var i = 0; i < this.adj[v].length; i++) {
    var w = this.adj[v][i];
    if (!this.marked[w]) {
      possibleToVisit.push(w);
      this.dfs(w);
    }
  }
  console.log(possibleToVisit);
}
function checkRoute(v, v2) {
  this.dfs(v);
  if (possibleToVisit.indexOf(v2) === -1) {
    return false;
  }
  return true;
}
g = new Graph(5);
g.addEdge(0, 1);
g.addEdge(0, 2);
g.addEdge(1, 3);
g.addEdge(2, 4);
// g.showGraph();
// g.dfs(0);
console.log(g.checkRoute(0, 4)); // true
console.log(g.checkRoute(0, 5)); // false
https://jsfiddle.net/youngfreezy/t1ora6ab/3/#update
You can write a DFS "starter" function, which will reset all variables, and return something if necessary:
function Graph(v) {
  this.startDfs = startDfs;
  this.possibleToVisit = [];
}
// ...
function startDfs(v) {
  this.possibleToVisit = []; // here, you can reset any values
  this.dfs(v);
  return true; // here, you could return a custom object containing 'possibleToVisit'
}
And call it only using startDfs:
function checkRoute(v, v2) {
  this.startDfs(v);
  if (this.possibleToVisit.indexOf(v2) === -1) {
    return false;
  }
  return true;
}
Here is the updated JSFiddle.
Arrays in JavaScript are passed by reference, so something like
function fill(a, l) {
  for (var i = 0; i < l; i++)
    a.push(i + 10);
}

function check(idx, max) {
  var arr = [];
  fill(arr, max);
  console.log(arr[idx]); // 14
}

check(4, 10);
would work, and every time check is called, arr is fresh and clean.
You can use a marked[] array (which is filled up during the dfs() call) to determine whether a particular vertex can be reached from a known vertex s.
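A minimal sketch of that approach, assuming a Graph constructor along the lines of the question's code; resetting marked before each query replaces the global possibleToVisit array entirely:

```javascript
function Graph(v) {
  this.vertices = v;
  this.adj = [];
  this.marked = [];
  for (var i = 0; i < v; i++) this.adj[i] = [];
}

Graph.prototype.addEdge = function(v, w) {
  this.adj[v].push(w);
  this.adj[w].push(v);
};

Graph.prototype.dfs = function(v) {
  this.marked[v] = true;
  for (var i = 0; i < this.adj[v].length; i++) {
    var w = this.adj[v][i];
    if (!this.marked[w]) this.dfs(w);
  }
};

Graph.prototype.checkRoute = function(v, v2) {
  this.marked = []; // reset before every query, so stale marks never leak
  this.dfs(v);
  return !!this.marked[v2];
};

var g = new Graph(5);
g.addEdge(0, 1);
g.addEdge(0, 2);
g.addEdge(1, 3);
g.addEdge(2, 4);
console.log(g.checkRoute(0, 4)); // true
console.log(g.checkRoute(0, 5)); // false
```

Checking `marked[v2]` directly is O(1) after the traversal, whereas `possibleToVisit.indexOf(v2)` is an extra O(n) scan.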
Please take a look at the depth first search implementation in the following library:
https://github.com/chen0040/js-graph-algorithms
It provides an object oriented approach to the graph creation as well as the depth first search algorithm.
The sample code for its depth first search algorithm is given here:
var jsgraphs = require('js-graph-algorithms');

var g = new jsgraphs.Graph(6);
g.addEdge(0, 5);
g.addEdge(2, 4);
g.addEdge(2, 3);
g.addEdge(1, 2);
g.addEdge(0, 1);
g.addEdge(3, 4);
g.addEdge(3, 5);
g.addEdge(0, 2);

var starting_vertex = 0;
var dfs = new jsgraphs.DepthFirstSearch(g, starting_vertex);

for (var v = 1; v < g.V; ++v) {
  if (dfs.hasPathTo(v)) {
    console.log(starting_vertex + " is connected to " + v);
    console.log("path: " + dfs.pathTo(v));
  } else {
    console.log('No path from ' + starting_vertex + ' to ' + v);
  }
}
I am working on an implementation of the A* algorithm in JavaScript. It works, however it takes several seconds to create a path between two very close points, (1,1) to (6,6). I would like to know what mistakes I have made in the algorithm and how to resolve them.
My code:
Node.prototype.genNeighbours = function() {
  var right = new Node(this.x + 1, this.y);
  var left = new Node(this.x - 1, this.y);
  var top = new Node(this.x, this.y + 1);
  var bottom = new Node(this.x, this.y - 1);
  this.neighbours = [right, left, top, bottom];
}

AStar.prototype.getSmallestNode = function(openarr) {
  var comp = 0;
  for (var i = 0; i < openarr.length; i++) {
    if (openarr[i].f < openarr[comp].f) comp = i;
  }
  return comp;
}
AStar.prototype.calculateRoute = function(start, dest, arr) {
  var open = new Array();
  var closed = new Array();
  start.g = 0;
  start.h = this.manhattanDistance(start.x, dest.x, start.y, dest.y);
  start.f = start.h;
  start.genNeighbours();
  open.push(start);
  while (open.length > 0) {
    var currentNode = null;
    this.getSmallestNode(open);
    currentNode = open[0];
    if (this.equals(currentNode, dest)) return currentNode;
    currentNode.genNeighbours();
    var iOfCurr = open.indexOf(currentNode);
    open.splice(iOfCurr, 1);
    closed.push(currentNode);
    for (var i = 0; i < currentNode.neighbours.length; i++) {
      var neighbour = currentNode.neighbours[i];
      if (neighbour == null) continue;
      var newG = currentNode.g + 1;
      if (newG < neighbour.g) {
        var iOfNeigh = open.indexOf(neighbour);
        var iiOfNeigh = closed.indexOf(neighbour);
        open.splice(iOfNeigh, 1);
        closed.splice(iiOfNeigh, 1);
      }
      if (open.indexOf(neighbour) == -1 && closed.indexOf(neighbour) == -1) {
        neighbour.g = newG;
        neighbour.h = this.manhattanDistance(neighbour.x, dest.x, neighbour.y, dest.y);
        neighbour.f = neighbour.g + neighbour.h;
        neighbour.parent = currentNode;
        open.push(neighbour);
      }
    }
  }
}
Edit: I've now resolved the problem. It was because I was just calling open.sort(), which wasn't sorting the nodes by their 'f' value. I wrote a custom comparison function and now the algorithm runs quickly.
A few mistakes I've spotted:
Your set of open nodes is not structured in any way that makes retrieving the one with the minimal distance easy. The usual choice for this is a priority queue, but inserting new nodes in sorted order (instead of open.push(neighbour)) should suffice (at first).
In your getSmallestNode function, you may start the loop at index 1.
You are calling getSmallestNode(), but not using its result at all. You're only taking currentNode = open[0]; every time (and then even searching for its position to splice it -- it's 0!). With the queue, it's just currentNode = open.shift().
However, the most important thing (that could have gone most wrong) is your genNeighbours() function. It creates entirely new node objects every time it is called -- ones that were unheard of before, and are not known to your algorithm (or its closed set). They may be in the same position in your grid as other nodes, but they're different objects (which are compared by reference, not by similarity). This means that indexOf will never find those new neighbours in the closed array, and they will get processed over and over (and over). I won't attempt to calculate the complexity of this implementation, but I'd guess it's even worse than exponential.
Typically, the A* algorithm is executed on already existing graphs. An OOP genNeighbours function would return references to the existing node objects, instead of creating new ones with the same coordinates. If you need to generate the graph dynamically, you'll need a lookup structure (a two-dimensional array?) to store and retrieve already-generated nodes.
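A minimal sketch of such a lookup structure: a cache keyed by grid coordinates, so repeated genNeighbours calls return the same objects. The reduced Node constructor and the nodeAt helper here are stand-ins, not the question's full code:

```javascript
// Reduced Node: just coordinates and an initially-infinite path cost,
// so the newG < neighbour.g comparison works on first visit.
function Node(x, y) {
  this.x = x;
  this.y = y;
  this.g = Infinity;
}

// Coordinate-keyed cache: exactly one Node object per grid cell.
var nodeCache = {};
function nodeAt(x, y) {
  var key = x + "," + y;
  if (!(key in nodeCache)) nodeCache[key] = new Node(x, y);
  return nodeCache[key];
}

Node.prototype.genNeighbours = function() {
  this.neighbours = [
    nodeAt(this.x + 1, this.y),
    nodeAt(this.x - 1, this.y),
    nodeAt(this.x, this.y + 1),
    nodeAt(this.x, this.y - 1)
  ];
};

// Two lookups of the same cell now yield the same object, so the
// indexOf-based membership tests against open/closed can actually match.
console.log(nodeAt(1, 1) === nodeAt(1, 1)); // true
```

With reference identity restored, each cell is expanded at most once, and the closed-set check behaves as the algorithm expects.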