Outputs of feed-forward neural network approach 0 - javascript

I am trying to create a simple feed-forward neural network in JavaScript using a tutorial found here. I believe that I followed the tutorial correctly, because when I trained the network with an input matrix of [[0,0,1],[0,1,1],[1,0,1],[1,1,1]] and a solution matrix of [[0],[1],[1],[0]], it performed as expected. However, when I tried to train the network on the MNIST handwritten digit database, passing in much larger matrices, all of the elements in the output array approached zero. I suspect that this has to do with the dot product of the input array and the weights returning an array filled with large numbers, but my attempts to scale down these numbers caused the outputs to approach 1 instead. Would anybody be able to figure out what is going wrong? My neural network has only one hidden layer, with 300 neurons. The code snippet below shows the methods of my neural network, because I believe that is where I am going wrong, but if you want to see my entire messy, undocumented program, it can be found here. I am unfamiliar with the math library that I am using, so I wrote some of my own methods to go along with the math methods.
multMatrices(a, b) returns the product of matrices a and b.
math.multiply(a, b) returns the dot product of the two matrices.
math.add(a, b) and math.subtract(a, b) perform matrix addition and subtraction, and
transposeMatrix(a) returns the transpose of matrix a.
setAll(a, b) performs an operation on every element of matrix a: plugging the element into the sigmoid function (1 / (1 + e^-a)) in the case of "sigmoid", plugging it into the sigmoid derivative (a * (1 - a)) in the case of "sigmoidDerivitive", or setting it to a random value between 0 and 0.05 in the case of "randomlow".
I found that initializing the weights to values between 0 and 1 kept the loss stuck at 0.9, so I now set them using "randomlow".
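In case it helps, here is a minimal sketch of what such a setAll helper could look like; it follows the description above rather than the exact code in my program:
function setAll(a, op) {
    // Sketch only: applies an element-wise operation to a math.js matrix.
    return a.map(function(value) {
        if (op === "sigmoid") return 1 / (1 + Math.exp(-value));
        if (op === "sigmoidDerivitive") return value * (1 - value);
        if (op === "randomlow") return Math.random() * 0.05;
        return value;
    });
}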
function NeuralNetwork(x, y) {
    // Initializing the neural network
    this.input = x;
    this.y = y;
    this.sizes = [this.input._size[1], 300, this.y._size[1]];
    this.layers = this.sizes.length - 1;
    this.lyrs = [this.input];
    this.weights = [];
    this.dweights = [];
    for (var i = 0; i < this.layers; i++) {
        this.weights.push(new math.matrix());
        this.weights[i].resize([this.sizes[i], this.sizes[i + 1]]);
        this.weights[i] = setAll(this.weights[i], "randomlow");
    }
    this.output = new math.matrix();
    this.output.resize(this.y._size);
};
NeuralNetwork.prototype.set = function(x, y) {
    // I train the network by looping through values from the database and passing them into this function
    this.input = x;
    this.lyrs = [this.input];
    this.y = y;
};
NeuralNetwork.prototype.feedforward = function() {
    // Looping through the layers and multiplying them by their respective weights
    for (var i = 0; i < this.weights.length; i++) {
        this.lyrs[i + 1] = math.multiply(this.lyrs[i], this.weights[i]);
        this.lyrs[i + 1] = setAll(this.lyrs[i + 1], "sigmoid");
    }
    this.output = this.lyrs[this.lyrs.length - 1];
};
NeuralNetwork.prototype.backpropogate = function() {
    // Backpropagating the network. I don't fully understand this part
    this.antis = [
        function(a, b, c) {
            return (
                math.multiply(transposeMatrix(a[0]), multMatrices(math.multiply(multMatrices(math.multiply(math.subtract(b.y, b.output), 2), setAll(b.output, "sigmoidDerivitive")), transposeMatrix(c)), setAll(a[1], "sigmoidDerivitive")))
            );
        },
        function(a, b, c) {
            return (
                math.multiply(transposeMatrix(a[0]), multMatrices(math.multiply(math.subtract(b.y, b.output), 2), setAll(b.output, "sigmoidDerivitive")))
            );
        }
    ];
    this.input = [];
    this.weightInput = 0;
    for (var i = this.weights.length - 1; i >= 0; --i) {
        this.input.unshift(this.lyrs[i]);
        this.weightInput = (i === this.weights.length - 1 ? 0 : this.weights[i + 1]);
        this.dweights[i] = this.antis[i](this.input, this, this.weightInput);
    }
    for (var i = 0; i < this.dweights.length; i++) {
        this.weights[i] = math.add(this.weights[i], this.dweights[i]);
    }
};
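In case it matters, the training loop that drives these methods looks roughly like the sketch below; trainInputs and trainLabels are placeholders for the MNIST matrices my program actually builds:
// Hypothetical outline of the training loop; trainInputs and trainLabels
// stand in for math.js matrices built from the MNIST data elsewhere.
var nn = new NeuralNetwork(trainInputs[0], trainLabels[0]);
for (var epoch = 0; epoch < 10; epoch++) {
    for (var i = 0; i < trainInputs.length; i++) {
        nn.set(trainInputs[i], trainLabels[i]);
        nn.feedforward();
        nn.backpropogate();
    }
}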
As always, I appreciate any time spent trying to solve my problem. If my code is unintelligible, don't bother with it. JavaScript probably isn't the best language for this purpose, but I didn't want to follow the tutorial in the same language it was written in.
EDIT: This is a potential duplicate of this post, which was answered. If anybody is facing this problem, they should see if the answer over there is of help. As of now I have not tested it with my program.

Related

Can't get Lotka-Volterra equations to oscillate stable with math.js

I'm trying to implement a simple Lotka-Volterra system in JavaScript, but I get a different result from what I see in academic papers and slides. These are my equations:
sim2.eval("dxdt(x, y) = (2 * x) - (x * y)");
sim2.eval("dydt(x, y) = (-0.25 * y) + (x * y)");
using coefficients a = 2, b = 1, c = 0.25 and d = 1. Yet my result (plot not shown here) is not the stable oscillation I expected from these PDF slides.
Could it be the implementation of ndsolve that causes this? Or a machine error in JavaScript due to floating-point arithmetic?
Disregard, the error was simply using too big an evaluation step (dt = 0.1; it needs to be 0.01 or smaller). The numerical method used is known to have this problem.
For serious purposes use a higher-order method; the minimum is the fixed-step classical Runge-Kutta method. Then you can also use dt = 0.1: it is stable over multiple periods, and I tried tfinal = 300 without problems. However, you will see the step size in the graph, as the curve is visibly piecewise linear. This is much reduced with half the step size, dt = 0.05.
function odesolveRK4(f, x0, dt, tmax) {
    var n = f.size()[0];                      // Number of variables
    var x = x0.clone(), xh = [];              // Current values of variables
    var dxdt = [], k1 = [], k2 = [], k3 = [], k4 = []; // Temporaries for the stage derivatives
    var result = [];                          // Contains entire solution
    var nsteps = math.divide(tmax, dt);       // Number of time steps
    var dt2 = math.divide(dt, 2);
    var dt6 = math.divide(dt, 6);
    for (var i = 0; i < nsteps; i++) {
        // compute the 4 stages of the classical order-4 Runge-Kutta method
        k1 = f.map(function(fj) { return fj.apply(null, x.toArray()); });
        xh = math.add(x, math.multiply(k1, dt2));
        k2 = f.map(function(fj) { return fj.apply(null, xh.toArray()); });
        xh = math.add(x, math.multiply(k2, dt2));
        k3 = f.map(function(fj) { return fj.apply(null, xh.toArray()); });
        xh = math.add(x, math.multiply(k3, dt));
        k4 = f.map(function(fj) { return fj.apply(null, xh.toArray()); });
        x = math.add(x, math.multiply(math.add(math.add(k1, k4), math.multiply(math.add(k2, k3), 2)), dt6));
        if (i % 50 == 0) console.log("%3d %o %o", i, dt, x.toString());
        result.push(x.clone());
    }
    return math.matrix(result);
}
math.import({odesolveRK4:odesolveRK4});
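As a usage sketch only, assuming the same math.js parser setup as in the question (newer math.js versions use evaluate instead of eval):
// Illustrative usage with the Lotka-Volterra system from the question.
var sim = math.parser();
sim.eval("dxdt(x, y) = (2 * x) - (x * y)");
sim.eval("dydt(x, y) = (-0.25 * y) + (x * y)");
var f = math.matrix([sim.get("dxdt"), sim.get("dydt")]); // right-hand sides
var x0 = math.matrix([1, 1]);                            // initial populations
var solution = odesolveRK4(f, x0, 0.05, 300);            // dt = 0.05, tfinal = 300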

how to optimize a JavaScript random normal generation algo

Here is my code. This function returns a number that follows the standard normal distribution.
var rnorm = function() {
    var n = 1000;
    var x = 0;
    var ss = 0;
    for (var i = 0; i < n; i++) {
        var a = Math.random();
        x += a;
        ss += Math.pow(a - 0.5, 2);
    }
    var xbar = x / n;
    var v = ss / n;
    var sd = Math.sqrt(v);
    return (xbar - 0.5) / (sd / Math.sqrt(n));
};
It is simple and exploits the central limit theorem. Here is a jsfiddle running this thing 100,000 times and printing some info to the console.
https://jsfiddle.net/qx61fqpn/
It looks about right (I haven't written code to draw a histogram yet). The right proportions appear between (-1, 1), (-2, 2), (-3, 3) and beyond. The sample mean is 0, var & sd are 1, skew is 0 and kurtosis is 3.
But it's kind of slow.
I intend to write more complex functions using this one as a building block (like the chi-sq distribution).
My question is:
How can I rewrite this code to improve its performance? I'm talking about speed. Right now it's written so that someone with a bit of JavaScript and statistics knowledge can see what I'm doing (or trying to do, in case I've done it wrong).
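One commonly used faster alternative, sketched here purely for comparison with the CLT approach above, is the Box-Muller transform, which turns two uniform draws into one standard normal draw:
// Box-Muller transform: two uniforms -> one standard normal sample.
// Sketch for comparison only; not the asker's CLT-based approach.
var rnormFast = function() {
    var u = 0, v = 0;
    while (u === 0) u = Math.random(); // avoid log(0)
    while (v === 0) v = Math.random();
    return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
};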

Is this a neural network

I've spent the last 2 days watching youtube videos on neural networks.
In particular, I've been trying to implement a genetic algorithm that will evolve over time, however, most videos seem to be focused on neural networks that are trained, and then used for classification.
Being confused, I decided to simply try to implement the basic structure of the network, and have coded this - in JS, for convenience.
function sigmoid(x) { return 1 / (1 + Math.E ** -x); }

function Brain(inputs, hiddens, outputs) {
    this.weights = {
        hidden: [],
        output: []
    };
    for (var i = hiddens; i--;) {
        this.weights.hidden[i] = [];
        for (var w = inputs; w--;) this.weights.hidden[i].push(Math.random());
    }
    for (var i = outputs; i--;) {
        this.weights.output[i] = [];
        for (var w = hiddens; w--;) this.weights.output[i].push(Math.random());
    }
}
Brain.prototype.compute = function(inputs) {
    var hiddenInputs = [];
    for (var i = this.weights.hidden.length; i--;) {
        var dot = 0;
        for (var w = inputs.length; w--;) dot += inputs[w] * this.weights.hidden[i][w];
        hiddenInputs[i] = sigmoid(dot);
    }
    var outputs = [];
    for (var i = this.weights.output.length; i--;) {
        var dot = 0;
        for (var w = this.weights.hidden.length; w--;) dot += hiddenInputs[w] * this.weights.output[i][w];
        outputs[i] = sigmoid(dot);
    }
    return outputs;
}
var brain = new Brain(1,2,1);
brain.compute([1]);
I successfully get values between 0 and 1. And, when I use specific weights, I get the same value each time, for a constant input.
Is the terminology I'm using in the code correct?
I fear I may simply be observing false positives, and am not actually feeding forward.
Is the sigmoid function appropriate? Should I be using it for a genetic / evolving algorithm?
I noticed that I'm getting results only between 0.5 and 1.
To combine a neural network with a genetic algorithm, your best shot is probably NEAT. There is a very good implementation of this algorithm in JS called 'Neataptic'; you should be able to find it on GitHub.
When combining a GA with an ANN you generally want to adjust not only the weights but the structure as well.
Sigmoid activation is OK for a GA, but in many cases you also want other activation functions; you can find a small list of activation functions on Wikipedia or create your own.
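Purely as illustration, a few activation functions commonly listed alongside the logistic sigmoid:
// Common alternatives to the logistic sigmoid (illustrative only).
function tanhAct(x)   { return Math.tanh(x); }          // output in (-1, 1)
function relu(x)      { return Math.max(0, x); }         // output in [0, Infinity)
function leakyRelu(x) { return x > 0 ? x : 0.01 * x; }   // keeps a small gradient for x < 0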

reordering objects without impacting other objects

I have a list of items (think, files in a directory), where the order of these items is arbitrarily managed by a user. The user can insert an item between other items, delete items, and move them around.
What is the best way to store the ordering as a property of each item so that when a specific item is inserted or moved, the ordering property of the other items is not affected? These objects will be stored in a database.
An ideal implementation would be able to support an infinite number of insertions/reorders.
The test I'm using to identify the limitations of the approach are as follows:
With 3 items x,y,z, repeatedly take the item on the left and put it between the other two; then take the object on the right and put it between the other two; keep going until some constraint is violated.
For others' reference, I have included some algorithms I have tried.
1.1. Decimals, double-precision
Store the order as a decimal. To insert an item between two items with orders x and y, calculate its order as x/2 + y/2.
Limitations:
Precision, or performance. Using doubles, when the denominator becomes too big, we end up with x/2 + y/2 == x. In JavaScript, it can only handle 25 shuffles.
function doubles(x, y, z) {
    for (var i = 0; i < 10000000; i++) {
        //x,y,z
        //x->v1: y,v1,z
        //z->v2: y,v2,v1
        var v1 = y/2 + z/2
        var v2 = y/2 + v1/2
        x = y
        y = v2
        z = v1
        if (x == y) {
            console.log(i)
            break
        }
    }
}
>doubles(1, 1.5, 2)
>25
1.2. Decimals, BigDecimal
The same as above, but using BigDecimal from https://github.com/iriscouch/bigdecimal.js. In my test, performance degraded to an unusable level very quickly. It might be a good choice for other frameworks, but not for client-side JavaScript.
I threw that implementation away and don't have it anymore.
2.1. Fractions
Store the order as a (numerator, denominator) integer tuple. To insert an item between items xN/xD and yN/yD, give it a value of (xN+yN)/(xD+yD) (which can easily be shown to be between the other two numbers).
Limitations:
precision or overflow.
function fractions(xN, xD, yN, yD, zN, zD) {
    for (var i = 0; i < 10000000; i++) {
        //x,y,z
        //x->v1: y,v1,z
        //z->v2: y,v2,v1
        var v1N = yN + zN, v1D = yD + zD
        var v2N = yN + v1N, v2D = yD + v1D
        xN = yN, xD = yD
        yN = v2N, yD = v2D
        zN = v1N, zd = v1D
        if (!isFinite(xN) || !isFinite(xD)) { // overflow
            console.log(i)
            break
        }
        if (xN/xD == yN/yD) { // precision
            console.log(i)
            break
        }
    }
}
>fractions(1,1,3,2,2,1)
>737
2.2. Fractions with GCD reduction
The same as above, but reduce the fractions using a greatest common divisor (GCD) algorithm:
function gcd(x, y) {
    if (!isFinite(x) || !isFinite(y)) {
        return NaN
    }
    while (y != 0) {
        var z = x % y;
        x = y;
        y = z;
    }
    return x;
}
function fractionsGCD(xN, xD, yN, yD, zN, zD) {
    for (var i = 0; i < 10000000; i++) {
        //x,y,z
        //x->v1: y,v1,z
        //z->v2: y,v2,v1
        var v1N = yN + zN, v1D = yD + zD
        var v2N = yN + v1N, v2D = yD + v1D
        var v1gcd = gcd(v1N, v1D)
        var v2gcd = gcd(v2N, v2D)
        xN = yN, xD = yD
        yN = v2N/v2gcd, yD = v2D/v2gcd
        zN = v1N/v1gcd, zd = v1D/v1gcd
        if (!isFinite(xN) || !isFinite(xD)) { // overflow
            console.log(i)
            break
        }
        if (xN/xD == yN/yD) { // precision
            console.log(i)
            break
        }
    }
}
>fractionsGCD(1,1,3,2,2,1)
>6795
3. Alphabetic
Use alphabetic ordering. The idea is to start with an alphabet (say, the ASCII printable range [32..126]) and grow the strings. So, with 'O' being the middle of our range: to insert between "a" and "c", use "b"; to insert between "a" and "b", use "aO"; and so forth.
Limitations:
The strings would get so long as to not fit in a database.
function middle(low, high) {
    for (var i = 0; i < high.length; i++) {
        var lowcode, hicode
        if (i == low.length) {
            // aa vs aaf
            lowcode = 32
            hicode = high.charCodeAt(i)
            return low + String.fromCharCode((hicode - lowcode) / 2)
        }
        lowcode = low.charCodeAt(i)
        hicode = high.charCodeAt(i)
        if (lowcode == hicode) {
            continue
        } else if (hicode - lowcode == 1) {
            // aa vs ab
            return low + 'O';
        } else {
            // aa vs aq
            return low.slice(0, i) + String.fromCharCode(lowcode + (hicode - lowcode) / 2)
        }
    }
}
function alpha(x, y, z, N) {
    for (var i = 0; i < 10000; i++) {
        //x,y,z
        //x->v1: y,v1,z
        //z->v2: y,v2,v1
        var v1 = middle(y, z)
        var v2 = middle(y, v1)
        x = y
        y = v2
        z = v1
        if (x.length > N) {
            console.log(i)
            break
        }
    }
}
>alpha('?', 'O', '_', 256)
1023
>alpha('?', 'O', '_', 512)
2047
Perhaps I have missed something fundamental, and I will admit I know little about JavaScript, but surely you can just implement a doubly-linked list to deal with this? Then, to reorder a,b,c,d,e,f,g,h and insert X between d and e, you just unlink d->e, link d->X and then link X->e, and so on.
Because in any of the scenarios above, either you will run out of precision (and your infinite ordering is lost) or you'll end up with very long sort identifiers and no memory :)
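A minimal sketch of that idea, with a hypothetical node shape (prev/next references instead of an order number):
// Hypothetical doubly-linked list insert: place node x between d and e.
function insertBetween(d, e, x) {
    x.prev = d;
    x.next = e;
    d.next = x;
    e.prev = x;
}
In a database, prev and next would be stored as item ids, so an insert only touches the two neighbours.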
Software axiom #1: KEEP IT SIMPLE until you have found a compelling, real and proven reason to make it more complicated.
So, I'd argue that it's extra and unnecessary code and maintenance to maintain your own order property when the DOM is already doing it for you. Why not just let the DOM maintain the order and you can dynamically generate a set of brain-dead simple sequence numbers for the current ordering any time you need it? CPUs are plenty fast to generate new sequence numbers for all items anytime you need it or anytime it changes. And, if you want to save this new ordering on the server, just send the whole sequence to the server.
Implementing one of these splitting sequences so you can always insert more objects without ever renumbering anything is going to be a lot of code and a lot of opportunities for bugs. You should not go there until it's been proven that you really need that level of complication.
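A sketch of that renumbering pass, with hypothetical names:
// Regenerate simple sequence numbers for the current order, then persist
// the whole list. The items array and its order property are hypothetical.
function renumber(items) {
    items.forEach(function(item, index) {
        item.order = index;
    });
    return items; // send the full, renumbered list to the server
}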
Store the items in an array, and use splice() to insert and delete elements.
Or is this not acceptable because of the comment you made in response to the linked list answer?
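For example:
// Keeping order implicitly by array position.
var items = ['a', 'b', 'c', 'd'];
items.splice(2, 0, 'X');            // insert 'X' before index 2 -> a, b, X, c, d
var moved = items.splice(4, 1)[0];  // remove 'd'
items.splice(1, 0, moved);          // move it to index 1 -> a, d, b, X, c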
The problem you are trying to solve is essentially insertion sort, which has a simple O(n^2) implementation, but there are ways to improve it.
Suppose there is an order variable associated with each element. You can assign these orders smartly, with large gaps between values, and get an amortized O(n*log(n)) mechanism. Look at (Insertion sort is nlogn)
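A rough sketch of the gap idea, with hypothetical constants:
// Hypothetical gap-based ordering: leave room between order values so most
// inserts need no renumbering.
var GAP = 1024;

function initialOrders(items) {
    items.forEach(function(item, i) { item.order = (i + 1) * GAP; });
}

function orderBetween(prevOrder, nextOrder) {
    // Returns a new order value, or null when the gap is exhausted
    // and a full renumbering pass (initialOrders) is needed.
    var mid = Math.floor((prevOrder + nextOrder) / 2);
    return (mid === prevOrder) ? null : mid;
}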

JavaScript Noise Function Problems

I've been trying to learn about generating noise and find that I understand most of it but I'm having a bit of trouble with a script.
I used this page as a guide to write this script in JavaScript with the ultimate purpose of creating some noise on canvas.
It's definitely creating something but it's tucked all the way over on the left. Also, refreshing the page seems to create the same pattern over and over again.
What have I done wrong that the "noisy" part of the image is smushed on the left? How can I make it look more like the cloudy perlin noise?
I don't really understand why it doesn't produce a new pattern each time. What would I need to change in order to receive a random pattern each time the script is run?
Thank you for your help!
/* NOISE - Tie it all together */
function perlin2d(x, y) {
    var total = 0;
    var p = persistence;
    var n = octaves - 1;
    for (var i = 0; i <= n; i++) {
        var frequency = Math.pow(2, i);
        var amplitude = Math.pow(p, i);
        total = total + interpolatenoise(x * frequency, y * frequency) * amplitude;
    }
    return total;
}
I've forked your fiddle and fixed a couple things to make it work: http://jsfiddle.net/KkDVr/2/
The main problem was the flawed pseudorandom generator "noise", which always returned 1 for large enough values of x and y. I've replaced it with a table of random values that is queried with integer coordinates:
var values = [];
for (var i = 0; i < height; i++) {
    values[i] = [];
    for (var j = 0; j < width; j++) {
        values[i][j] = Math.random() * 2 - 1;
    }
}

function noise(x, y) {
    x = parseInt(Math.min(width - 1, Math.max(0, x)));
    y = parseInt(Math.min(height - 1, Math.max(0, y)));
    return values[x][y];
}
However, the implementation provided in the tutorial you followed uses simplified algorithms that are poorly optimized. I suggest the excellent real-world noise tutorial at http://scratchapixel.com/lessons/3d-advanced-lessons/noise-part-1.
Finally, you might be interested in a project of mine: http://lencinhaus.github.com/canvas-noise.
It's a JavaScript app that renders Perlin noise on an HTML5 canvas and lets you tweak almost any parameter visually. I've ported Ken Perlin's original noise algorithm implementation to JavaScript, so that may be useful for you. You can find the source code here: https://github.com/lencinhaus/canvas-noise/tree/gh-pages.
Hope that helps, bye!
