Is this a neural network? - JavaScript

I've spent the last 2 days watching YouTube videos on neural networks.
In particular, I've been trying to implement a genetic algorithm that will evolve over time; however, most videos seem to focus on neural networks that are trained and then used for classification.
Confused, I decided to implement just the basic structure of the network, and have coded this in JS, for convenience.
function sigmoid(x) { return 1 / (1 + Math.E ** -x); }

function Brain(inputs, hiddens, outputs) {
  this.weights = {
    hidden: [],
    output: []
  };
  for (var i = hiddens; i--;) {
    this.weights.hidden[i] = [];
    for (var w = inputs; w--;) this.weights.hidden[i].push(Math.random());
  }
  for (var i = outputs; i--;) {
    this.weights.output[i] = [];
    for (var w = hiddens; w--;) this.weights.output[i].push(Math.random());
  }
}
Brain.prototype.compute = function(inputs) {
  var hiddenInputs = [];
  for (var i = this.weights.hidden.length; i--;) {
    var dot = 0;
    for (var w = inputs.length; w--;) dot += inputs[w] * this.weights.hidden[i][w];
    hiddenInputs[i] = sigmoid(dot);
  }
  var outputs = [];
  for (var i = this.weights.output.length; i--;) {
    var dot = 0;
    for (var w = this.weights.hidden.length; w--;) dot += hiddenInputs[w] * this.weights.output[i][w];
    outputs[i] = sigmoid(dot);
  }
  return outputs;
};
var brain = new Brain(1,2,1);
brain.compute([1]);
I successfully get values between 0 and 1, and when I use specific weights, I get the same value each time for a constant input.
Is the terminology I'm using in code good?
I fear I may simply be observing false positives, and am not actually feeding forward.
Is the sigmoid function appropriate? Should I be using it for a genetic / evolving algorithm?
I noticed that I'm getting results only between 0.5 and 1.

To combine a neural network with a genetic algorithm, your best bet is probably NEAT (NeuroEvolution of Augmenting Topologies). There is a very good JS implementation of this algorithm called 'Neataptic'; you should be able to find it on GitHub.
When combining a GA with an ANN, you generally want to adjust not only the weights but the structure as well.
Sigmoid activation is fine for a GA, but in many cases you will also want other activation functions; you can find a small list of activation functions on Wikipedia, or create your own. (As for your outputs landing only between 0.5 and 1: your inputs and weights are all non-negative, so every dot product is non-negative, and sigmoid maps non-negative values to [0.5, 1). Initializing weights with something like Math.random() * 2 - 1 allows outputs below 0.5.)
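To make the "adjust the weights" part concrete, here is a minimal sketch of a GA-style mutation step for the Brain above (my own example; the rate and scale parameters are hypothetical):
function mutate(brain, rate, scale) {
  ["hidden", "output"].forEach(function(layer) {
    brain.weights[layer].forEach(function(row) {
      for (var i = 0; i < row.length; i++) {
        // With probability `rate`, nudge the weight by a random amount.
        if (Math.random() < rate) row[i] += (Math.random() * 2 - 1) * scale;
      }
    });
  });
}
// e.g. after scoring each brain's fitness: mutate(brain, 0.1, 0.5);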

Related

How V8 optimises code using hidden classes and inline caching

Recently I came across the concept of hidden classes and inline caching used by V8 to optimise JS code. Cool.
I understand that objects are represented as hidden classes internally, and that two objects may have the same properties but different hidden classes (depending upon the order in which the properties are assigned).
Also, V8 uses inline caching to access an object's properties directly by offset, rather than going through the object's hidden class to determine the offsets every time.
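As a minimal illustration of the order-dependence (my own example, not from the benchmark below):
var a = {};
a.x = 1;
a.y = 2;
var b = {};
b.y = 2; // same properties as `a`, but added in a different order...
b.x = 1; // ...so V8 gives `b` a different hidden class
// With node --allow-natives-syntax, %HaveSameMap(a, b) returns false.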
Code -
function Point(x, y) {
  this.x = x;
  this.y = y;
}

function processPoint(point) {
  // console.log(point.x, point.y, point.a, point.b);
  // let x = point;
}

function main() {
  let p1 = new Point(1, 1);
  let p2 = new Point(1, 1);
  let p3 = new Point(1, 1);
  const N = 300000000;

  p1.a = 1;
  p1.b = 1;
  p2.b = 1;
  p2.a = 1;
  p3.a = 1;
  p3.b = 1;

  let start_1 = new Date();
  for (let i = 0; i < N; i++) {
    if (i % 4 != 0) {
      processPoint(p1);
    } else {
      processPoint(p2);
    }
  }
  let end_1 = new Date();
  let t1 = (end_1 - start_1);

  let start_2 = new Date();
  for (let i = 0; i < N; i++) {
    if (i % 4 != 0) {
      processPoint(p1);
    } else {
      processPoint(p1);
    }
  }
  let end_2 = new Date();
  let t2 = (end_2 - start_2);

  let start_3 = new Date();
  for (let i = 0; i < N; i++) {
    if (i % 4 != 0) {
      processPoint(p1);
    } else {
      processPoint(p3);
    }
  }
  let end_3 = new Date();
  let t3 = (end_3 - start_3);

  console.log(t1, t2, t3);
}

(function() {
  main();
})();
I was expecting the results to look like t1 > (t2 = t3) because:
First loop: V8 will try to optimise after the function runs twice, but it will soon encounter a different hidden class, so it will deoptimise.
Second loop: the same object is passed every time, so inline caching can be used.
Third loop: same as the second loop, because the hidden classes are the same.
But the results were not satisfying. I got (with similar results across repeated runs):
3553 4805 4556
Questions:
Why were the results not as expected? Where did my assumptions go wrong?
How can I change this code to demonstrate the performance improvements from hidden classes and inline caching?
Did I get it all wrong from the start?
Are hidden classes present just for memory efficiency by letting objects share them?
Any other sites with some simple examples of performance improvements?
I am using node 8.9.4 for testing. Thanks in advance.
Sources :
https://blog.sessionstack.com/how-javascript-works-inside-the-v8-engine-5-tips-on-how-to-write-optimized-code-ac089e62b12e
https://draft.li/blog/2016/12/22/javascript-engines-hidden-classes/
https://richardartoul.github.io/jekyll/update/2015/04/26/hidden-classes.html
and many more..
V8 developer here. The summary is: Microbenchmarking is hard, don't do it.
First off, with your code as posted, I'm seeing 380 380 380 as the output, which is expected, because function processPoint is empty, so all loops do the same work (i.e., no work) no matter which point object you select.
Measuring the performance difference between monomorphic and 2-way polymorphic inline caches is difficult, because it is not large, so you have to be very careful about what else your benchmark is doing. console.log, for example, is so slow that it'll shadow everything else.
You'll also have to be careful about the effects of inlining. When your benchmark has many iterations, the code will get optimized (after running waaaay more than twice), and the optimizing compiler will (to some extent) inline functions, which can allow subsequent optimizations (specifically: eliminating various things) and thereby can significantly change what you're measuring. Writing meaningful microbenchmarks is hard; you won't get around inspecting generated assembly and/or knowing quite a bit about the implementation details of the JavaScript engine you're investigating.
Another thing to keep in mind is where inline caches are, and what state they'll have over time. Disregarding inlining, a function like processPoint doesn't know or care where it's called from. Once its inline caches are polymorphic, they'll remain polymorphic, even if later on in your benchmark (in this case, in the second and third loop) the types stabilize.
Yet another thing to keep in mind when trying to isolate effects is that long-running functions will get compiled in the background while they run, and will then at some point be replaced on the stack ("OSR"), which adds all sorts of noise to your measurements. When you invoke them with different loop lengths for warmup, they'll still get compiled in the background however, and there's no way to reliably wait for that background job. You could resort to command-line flags intended for development, but then you wouldn't be measuring regular behavior any more.
Anyhow, the following is an attempt to craft a test similar to yours that produces plausible results (about 100 180 280 on my machine):
function Point() {}

// These three functions are identical, but they will be called with different
// inputs and hence collect different type feedback:
function processPointMonomorphic(N, point) {
  let sum = 0;
  for (let i = 0; i < N; i++) {
    sum += point.a;
  }
  return sum;
}

function processPointPolymorphic(N, point) {
  let sum = 0;
  for (let i = 0; i < N; i++) {
    sum += point.a;
  }
  return sum;
}

function processPointGeneric(N, point) {
  let sum = 0;
  for (let i = 0; i < N; i++) {
    sum += point.a;
  }
  return sum;
}

let p1 = new Point();
let p2 = new Point();
let p3 = new Point();
let p4 = new Point();
const warmup = 12000;
const N = 100000000;
let sum = 0;

p1.a = 1;
p2.b = 1;
p2.a = 1;
p3.c = 1;
p3.b = 1;
p3.a = 1;
p4.d = 1;
p4.c = 1;
p4.b = 1;
p4.a = 1;

processPointMonomorphic(warmup, p1);
processPointMonomorphic(1, p1);
let start_1 = Date.now();
sum += processPointMonomorphic(N, p1);
let t1 = Date.now() - start_1;

processPointPolymorphic(2, p1);
processPointPolymorphic(2, p2);
processPointPolymorphic(2, p3);
processPointPolymorphic(warmup, p4);
processPointPolymorphic(1, p4);
let start_2 = Date.now();
sum += processPointPolymorphic(N, p1);
let t2 = Date.now() - start_2;

processPointGeneric(warmup, 1);
processPointGeneric(1, 1);
let start_3 = Date.now();
sum += processPointGeneric(N, p1);
let t3 = Date.now() - start_3;

console.log(t1, t2, t3);

Outputs of feed-forward neural network approach 0

I am trying to create a simple feed-forward neural network in JavaScript using a tutorial found here. I believe that I followed the tutorial correctly, as when I trained it with an input matrix of [[0,0,1],[0,1,1],[1,0,1],[1,1,1]] and a solution matrix of [[0],[1],[1],[0]], the network performed as expected.
However, when I tried to train the network using the MNIST handwritten number database, passing in larger matrices, all of the elements in the output array approached zero. I suspect that this has to do with the dot product of the input array and the weights returning an array filled with large numbers, but my attempts to scale down these numbers have caused the outputs to approach 1 instead. Would anybody be able to figure out what is going wrong?
My neural network has only one hidden layer, with 300 neurons. The code snippet below shows the methods for my neural network, because I believe that that is where I am going wrong; if you want to see my entire messy, undocumented program, it can be found here. I am unfamiliar with the math library that I am using, so I made some of my own methods to go along with the math methods.
multMatrices(a, b) returns the product of matrices a and b.
math.multiply(a, b) returns the dot product of the two matrices.
math.add(a, b) and math.subtract(a, b) perform matrix addition and subtraction, and
transposeMatrix(a) returns the transpose of matrix a.
setAll(a, b) performs an operation on every element of matrix a, be it plugging the element into the sigmoid function (1 / (1 + e^-a)) in the case of "sigmoid", the sigmoid derivative function (a * (1 - a)) in the case of "sigmoidDerivitive", or setting it equal to a random value between 0 and 0.05 in the case of "randomlow".
I found that setting weights to a value between 0 and 1 kept the loss at 0.9, so I now set them using "randomlow".
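The setAll helper itself isn't shown in the snippet; for concreteness, here is a minimal sketch of what it could look like using math.js's matrix.map (my own reconstruction, so details may differ from the asker's real program):
function setAll(matrix, op) {
  // math.js matrices expose .map(), which applies a function element-wise.
  return matrix.map(function(value) {
    switch (op) {
      case "sigmoid": return 1 / (1 + Math.exp(-value));
      case "sigmoidDerivitive": return value * (1 - value);
      case "randomlow": return Math.random() * 0.05;
    }
  });
}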
function NeuralNetwork(x, y) {
  // Initializing the neural network
  this.input = x;
  this.y = y;
  this.sizes = [this.input._size[1], 300, this.y._size[1]];
  this.layers = this.sizes.length - 1;
  this.lyrs = [this.input];
  this.weights = [];
  this.dweights = [];
  for (var i = 0; i < this.layers; i++) {
    this.weights.push(new math.matrix());
    this.weights[i].resize([this.sizes[i], this.sizes[i + 1]]);
    this.weights[i] = setAll(this.weights[i], "randomlow");
  }
  this.output = new math.matrix();
  this.output.resize(this.y._size);
}

NeuralNetwork.prototype.set = function(x, y) {
  // I train the network by looping through values from the database
  // and passing them into this function.
  this.input = x;
  this.lyrs = [this.input];
  this.y = y;
};

NeuralNetwork.prototype.feedforward = function() {
  // Loop through the layers, multiplying each by its respective weights.
  for (var i = 0; i < this.weights.length; i++) {
    this.lyrs[i + 1] = math.multiply(this.lyrs[i], this.weights[i]);
    this.lyrs[i + 1] = setAll(this.lyrs[i + 1], "sigmoid");
  }
  this.output = this.lyrs[this.lyrs.length - 1];
};

NeuralNetwork.prototype.backpropogate = function() {
  // Backpropagating the network. I don't fully understand this part.
  this.antis = [
    function(a, b, c) {
      return math.multiply(
        transposeMatrix(a[0]),
        multMatrices(
          math.multiply(
            multMatrices(
              math.multiply(math.subtract(b.y, b.output), 2),
              setAll(b.output, "sigmoidDerivitive")
            ),
            transposeMatrix(c)
          ),
          setAll(a[1], "sigmoidDerivitive")
        )
      );
    },
    function(a, b, c) {
      return math.multiply(
        transposeMatrix(a[0]),
        multMatrices(
          math.multiply(math.subtract(b.y, b.output), 2),
          setAll(b.output, "sigmoidDerivitive")
        )
      );
    }
  ];
  this.input = [];
  this.weightInput = 0;
  for (var i = this.weights.length - 1; i >= 0; --i) {
    this.input.unshift(this.lyrs[i]);
    this.weightInput = (i === this.weights.length - 1 ? 0 : this.weights[i + 1]);
    this.dweights[i] = this.antis[i](this.input, this, this.weightInput);
  }
  for (var i = 0; i < this.dweights.length; i++) {
    this.weights[i] = math.add(this.weights[i], this.dweights[i]);
  }
};
As always, I appreciate any time spent trying to solve my problem. If my code is unintelligible, don't bother with it. JavaScript probably isn't the best language for this purpose, but I didn't want to follow a tutorial with the same language.
EDIT: This is a potential duplicate of this post, which was answered. If anybody is facing this problem, they should see if the answer over there is of help. As of now I have not tested it with my program.

How to optimize a JavaScript random normal generation algo

Here is my code. This function returns a number that follows the standard normal distribution.
var rnorm = function() {
  var n = 1000;
  var x = 0;
  var ss = 0;
  for (var i = 0; i < n; i++) {
    var a = Math.random();
    x += a;
    ss += Math.pow(a - 0.5, 2);
  }
  var xbar = x / n;
  var v = ss / n;
  var sd = Math.sqrt(v);
  return (xbar - 0.5) / (sd / Math.sqrt(n));
};
It is simple and exploits the central limit theorem. Here is a jsfiddle running this thing 100,000 times and printing some info to the console.
https://jsfiddle.net/qx61fqpn/
It looks about right (I haven't written code to draw a histogram yet). The right proportions appear between (-1, 1), (-2, 2), (-3, 3) and beyond. The sample mean is 0, the variance and SD are 1, the skew is 0, and the kurtosis is 3.
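As a quick sanity check (a sketch, not the code from the fiddle), the first two sample moments can be estimated like this:
var n = 100000, sum = 0, sumSq = 0;
for (var i = 0; i < n; i++) {
  var z = rnorm();
  sum += z;
  sumSq += z * z;
}
var mean = sum / n;                     // should be close to 0
var variance = sumSq / n - mean * mean; // should be close to 1
console.log(mean, variance);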
But it's kind of slow.
I intend to write more complex functions using this one as a building block (like the chi-sq distribution).
My question is:
How can I rewrite this code to improve its performance? I'm talking about speed. Right now, it's written so that someone with a bit of JavaScript and statistics knowledge can see what I'm doing (or trying to do, in the event I've done it wrong).
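For comparison, a standard constant-time alternative (not part of the original question) is the Box-Muller transform, which turns two uniform random numbers into an exact standard normal instead of summing 1000 of them:
function rnormBoxMuller() {
  var u = 0, v = 0;
  // Math.random() can return 0; reject it to avoid Math.log(0).
  while (u === 0) u = Math.random();
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}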

Calculate maximum available rows and columns to fill with N amount of items

By reviewing this and this, I've come up with a function that's probably more complex than it should be, but, man, my math sux:
function tablize(elements) {
  var root = Math.floor(Math.sqrt(elements));
  var factors = [];
  for (var i = 1; i <= root; i++) {
    if (elements % i === 0) {
      factors.push([i, elements / i]);
    }
  }
  // Pick the factor pair whose two sides are closest together.
  var smallest = null;
  var best = Infinity;
  for (var f = 0; f < factors.length; f++) {
    var factor = factors[f];
    var current = Math.abs(factor[0] - factor[1]);
    if (smallest === null || current < best) {
      best = current;
      smallest = f;
    }
  }
  return factors[smallest];
}
While this does work, it provides results I'm not satisfied with.
For instance, 7 is divided into 1x7, where I'd like it to be 3x3. That's the minimal, optimal grid size needed to fit 7 elements.
Likewise, 3 is divided into 1x3, where I'd like it to be 2x2.
I need this to distribute live camera feed frames on a monitor, but I'm totally lost. The only way I can think of is building an extra function that takes the previously generated number, feeds it back in, and divides again, but that seems wrong.
What is the optimal solution to solve this?
For squares:
function squareNeeded(num) {
  return Math.ceil(Math.sqrt(num));
}
http://jsfiddle.net/aKNVq/
(I think you mean the smallest square of a whole number that is bigger than the given amount, because if you meant a rectangle, then your example for seven would be 2*4 instead of 3*3.)
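If a non-square grid is acceptable, a small extension of the same idea (my own sketch, not part of the original answer) computes the column count from the square root and then trims any fully empty rows:
function gridNeeded(num) {
  var cols = Math.ceil(Math.sqrt(num)); // 7 -> 3 columns
  var rows = Math.ceil(num / cols);     // 7 -> 3 rows
  return [rows, cols];                  // 7 -> [3, 3], 3 -> [2, 2], 12 -> [3, 4]
}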

JavaScript Noise Function Problems

I've been trying to learn about generating noise, and I find that I understand most of it, but I'm having a bit of trouble with a script.
I used this page as a guide to write this script in JavaScript, with the ultimate purpose of creating some noise on a canvas.
It's definitely creating something, but it's tucked all the way over on the left. Also, refreshing the page seems to create the same pattern over and over again.
What have I done wrong that the "noisy" part of the image is smushed on the left? How can I make it look more like cloudy Perlin noise?
I don't really understand why it doesn't produce a new pattern each time. What would I need to change in order to get a random pattern each time the script is run?
Thank you for your help!
/* NOISE—Tie it all together
*/
function perlin2d(x, y) {
  var total = 0;
  // persistence, octaves and interpolatenoise() are defined elsewhere
  // (presumably in the full fiddle this snippet was taken from).
  var p = persistence;
  var n = octaves - 1;
  for (var i = 0; i <= n; i++) {
    var frequency = Math.pow(2, i);
    var amplitude = Math.pow(p, i);
    total = total + interpolatenoise(x * frequency, y * frequency) * amplitude;
  }
  return total;
}
I've forked your fiddle and fixed a couple of things to make it work: http://jsfiddle.net/KkDVr/2/
The main problem was the flawed pseudorandom generator "noise", which always returned 1 for large enough values of x and y. I've replaced it with a table of random values that is queried with integer coordinates:
var values = [];
for (var x = 0; x < width; x++) {
  values[x] = [];
  for (var y = 0; y < height; y++) {
    values[x][y] = Math.random() * 2 - 1;
  }
}

function noise(x, y) {
  // Clamp to the table bounds and truncate to integer coordinates.
  x = Math.floor(Math.min(width - 1, Math.max(0, x)));
  y = Math.floor(Math.min(height - 1, Math.max(0, y)));
  return values[x][y];
}
However, the implementation provided in the tutorial you followed uses simplified algorithms that are really poorly optimized. I suggest the excellent real-world noise tutorial at http://scratchapixel.com/lessons/3d-advanced-lessons/noise-part-1.
Finally, you might be interested in a project of mine: http://lencinhaus.github.com/canvas-noise.
It's a JavaScript app that renders Perlin noise on an HTML5 canvas and lets you tweak almost any parameter visually. I've ported Ken Perlin's original noise algorithm implementation to JavaScript, so that may be useful for you. You can find the source code here: https://github.com/lencinhaus/canvas-noise/tree/gh-pages.
Hope that helps, bye!
