I've been trying to learn about generating noise, and I find that I understand most of it, but I'm having a bit of trouble with a script.
I used this page as a guide to write this script in JavaScript, with the ultimate purpose of creating some noise on a canvas.
It's definitely creating something, but it's tucked all the way over on the left. Also, refreshing the page seems to create the same pattern over and over again.
What have I done wrong that the "noisy" part of the image is smushed up against the left edge? How can I make it look more like cloudy Perlin noise?
I also don't really understand why it doesn't produce a new pattern each time. What would I need to change in order to get a random pattern each time the script runs?
Thank you for your help!
/* NOISE—Tie it all together */
// Note: persistence, octaves, and interpolatenoise(x, y) are defined
// elsewhere in the script (they come from the guide I followed).
function perlin2d(x, y) {
    var total = 0;
    var p = persistence;
    var n = octaves - 1;
    for (var i = 0; i <= n; i++) {
        var frequency = Math.pow(2, i); // each octave doubles the frequency
        var amplitude = Math.pow(p, i); // and damps the amplitude by p
        total = total + interpolatenoise(x * frequency, y * frequency) * amplitude;
    }
    return total;
}
I've forked your fiddle and fixed a couple of things to make it work: http://jsfiddle.net/KkDVr/2/
The main problem was the flawed pseudorandom generator "noise", which always returned 1 for large enough values of x and y. I've replaced it with a table of random values that is queried with integer coordinates:
// Precompute a grid of random values in [-1, 1), one per integer coordinate.
var values = [];
for (var x = 0; x < width; x++) {
    values[x] = [];
    for (var y = 0; y < height; y++) {
        values[x][y] = Math.random() * 2 - 1;
    }
}

function noise(x, y) {
    // Clamp to the grid and truncate to integers before looking up.
    x = Math.floor(Math.min(width - 1, Math.max(0, x)));
    y = Math.floor(Math.min(height - 1, Math.max(0, y)));
    return values[x][y];
}
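On top of this table, perlin2d still needs the interpolatenoise(x, y) helper it calls. A minimal bilinear version looks roughly like this (a sketch; the smoothing in the actual fiddle may differ):

// Sketch of a bilinear interpolatenoise, assuming the noise(x, y) table above:
// blend the four surrounding integer grid values by the fractional parts.
function interpolatenoise(x, y) {
    var x0 = Math.floor(x), y0 = Math.floor(y);
    var fx = x - x0, fy = y - y0;
    var v00 = noise(x0, y0), v10 = noise(x0 + 1, y0);
    var v01 = noise(x0, y0 + 1), v11 = noise(x0 + 1, y0 + 1);
    var top = v00 + fx * (v10 - v00);    // interpolate along x at y0
    var bottom = v01 + fx * (v11 - v01); // interpolate along x at y0 + 1
    return top + fy * (bottom - top);    // interpolate along y
}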
However, the implementation provided in the tutorial you followed uses simplified algorithms that are really poorly optimized. I suggest the excellent real-world noise tutorial at http://scratchapixel.com/lessons/3d-advanced-lessons/noise-part-1.
Finally, you might be interested in a project of mine: http://lencinhaus.github.com/canvas-noise.
It's a JavaScript app that renders Perlin noise on an HTML5 canvas and lets you tweak almost any parameter visually. I've ported Ken Perlin's original noise algorithm implementation to JavaScript, so that may be useful for you. You can find the source code here: https://github.com/lencinhaus/canvas-noise/tree/gh-pages.
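As a side note, the "cloudy" look depends mostly on how you sample when drawing: divide the pixel coordinates down so neighbouring pixels get nearby noise values, otherwise the result looks like static. A minimal drawing sketch, assuming a <canvas id="canvas"> and that perlin2d returns values roughly in [-1, 1]:

// Fill the canvas with grayscale perlin2d samples.
var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");
var img = ctx.createImageData(canvas.width, canvas.height);
for (var py = 0; py < canvas.height; py++) {
    for (var px = 0; px < canvas.width; px++) {
        var v = perlin2d(px / 64, py / 64);        // scaled-down coordinates
        var shade = Math.floor((v + 1) / 2 * 255); // map [-1, 1] to [0, 255]
        var i = (py * canvas.width + px) * 4;
        img.data[i] = img.data[i + 1] = img.data[i + 2] = shade;
        img.data[i + 3] = 255;                     // fully opaque
    }
}
ctx.putImageData(img, 0, 0);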
Hope that helps, bye!
Related
I am trying to create a simple feed-forward neural network in JavaScript using a tutorial found here. I believe that I followed the tutorial correctly, as when I trained it with an input matrix of [[0,0,1],[0,1,1],[1,0,1],[1,1,1]] and a solution matrix of [[0],[1],[1],[0]], the network performed as expected.

However, when I tried to train the network using the MNIST handwritten number database, passing in larger matrices, all of the elements in the output array approached zero. I suspect that this has to do with the dot product of the input array and the weights returning an array filled with large numbers, but my attempts to scale down these numbers caused the outputs to approach 1 instead. Would anybody be able to figure out what is going wrong?

My neural network has only one hidden layer, with 300 neurons. The code snippet below shows the methods for my neural network, because I believe that is where I am going wrong; if you want to see my entire messy, undocumented program, it can be found here. I am unfamiliar with the math library that I am using, so I made some of my own methods to go along with the math library's methods:
multMatrices(a, b) returns the element-wise product of matrices a and b.
math.multiply(a, b) returns the matrix (dot) product of the two matrices.
math.add(a, b) and math.subtract(a, b) perform matrix addition and subtraction, and
transposeMatrix(a) returns the transpose of matrix a.
setAll(a, b) performs an operation on every element of matrix a: plugging the element into the sigmoid function (1 / (1 + e^-a)) in the case of "sigmoid", plugging it into the sigmoid derivative (a * (1 - a)) in the case of "sigmoidDerivitive", or setting it to a random value between 0 and 0.05 in the case of "randomlow".
I found that setting the weights to values between 0 and 1 kept the loss stuck at 0.9, so I now initialize them with "randomlow".
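A minimal sketch of what such a setAll could look like, assuming math.js matrices (the actual implementation is in the linked program); it uses math.js's math.map to visit every element:

// Hypothetical setAll: apply one of the named element-wise operations
// to a math.js matrix via math.map.
function setAll(matrix, op) {
    return math.map(matrix, function(v) {
        if (op === "sigmoid") return 1 / (1 + Math.exp(-v));
        if (op === "sigmoidDerivitive") return v * (1 - v);
        if (op === "randomlow") return Math.random() * 0.05;
        return v;
    });
}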
function NeuralNetwork(x, y) {
    // Initializing the neural network
    this.input = x;
    this.y = y;
    this.sizes = [this.input._size[1], 300, this.y._size[1]];
    this.layers = this.sizes.length - 1;
    this.lyrs = [this.input];
    this.weights = [];
    this.dweights = [];
    for (var i = 0; i < this.layers; i++) {
        this.weights.push(new math.matrix());
        this.weights[i].resize([this.sizes[i], this.sizes[i + 1]]);
        this.weights[i] = setAll(this.weights[i], "randomlow");
    }
    this.output = new math.matrix();
    this.output.resize(this.y._size);
};
NeuralNetwork.prototype.set = function(x, y) {
    // I train the network by looping through values from the database
    // and passing them into this function
    this.input = x;
    this.lyrs = [this.input];
    this.y = y;
};

NeuralNetwork.prototype.feedforward = function() {
    // Looping through the layers and multiplying them by their respective weights
    for (var i = 0; i < this.weights.length; i++) {
        this.lyrs[i + 1] = math.multiply(this.lyrs[i], this.weights[i]);
        this.lyrs[i + 1] = setAll(this.lyrs[i + 1], "sigmoid");
    }
    this.output = this.lyrs[this.lyrs.length - 1];
};

NeuralNetwork.prototype.backpropogate = function() {
    // Backpropagating the network. I don't fully understand this part.
    // Each function below computes the weight gradient for one layer:
    // the first for the hidden weights (the error is pushed back through
    // the next layer's weights), the second for the output weights.
    this.antis = [
        function(a, b, c) {
            return (
                math.multiply(transposeMatrix(a[0]), multMatrices(math.multiply(multMatrices(math.multiply(math.subtract(b.y, b.output), 2), setAll(b.output, "sigmoidDerivitive")), transposeMatrix(c)), setAll(a[1], "sigmoidDerivitive")))
            );
        },
        function(a, b, c) {
            return (
                math.multiply(transposeMatrix(a[0]), multMatrices(math.multiply(math.subtract(b.y, b.output), 2), setAll(b.output, "sigmoidDerivitive")))
            );
        }
    ];
    this.input = [];
    this.weightInput = 0;
    // Walk backwards from the output layer, computing each layer's gradient
    for (var i = this.weights.length - 1; i >= 0; --i) {
        this.input.unshift(this.lyrs[i]);
        this.weightInput = (i === this.weights.length - 1 ? 0 : this.weights[i + 1]);
        this.dweights[i] = this.antis[i](this.input, this, this.weightInput);
    }
    // Apply the computed gradients to the weights
    for (var i = 0; i < this.dweights.length; i++) {
        this.weights[i] = math.add(this.weights[i], this.dweights[i]);
    }
};
As always, I appreciate any time spent trying to solve my problem. If my code is unintelligible, don't bother with it. JavaScript probably isn't the best language for this purpose, but I didn't want to follow the tutorial in the same language it was written in.
EDIT: This is a potential duplicate of this post, which was answered. If anybody is facing this problem, they should see whether the answer over there helps. As of now, I have not tested it with my program.
I'm currently writing a paper on how to build a simple neural network made of an input layer, a hidden layer, and an output layer. While writing the part on backpropagation, I was thinking of an easier algorithm to explain how backpropagation works (the paper is intended for non-specialists). So I read this chapter: https://natureofcode.com/book/chapter-10-neural-networks/
And I found this formula for updating the weights:
NEW WEIGHT = WEIGHT + ERROR * INPUT * LEARNING CONSTANT
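For example (illustrative numbers): with WEIGHT = 0.5, ERROR = 0.2, INPUT = 1, and a learning constant of 0.1, the updated weight would be 0.5 + 0.2 * 1 * 0.1 = 0.52.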
I decided to give it a try by updating the "input-to-hidden" and "hidden-to-output" weights with the same learning constant. It quickly failed. Then I decided to apply a different learning constant to the input layer's weights and the hidden layer's weights:
for (var i = 0; i < Input.length; i++) {
    for (var a = 0; a < HiddenWeights.length; a++) {
        // learning constant one
        HiddenWeights[a] = HiddenWeights[a] + TargetCalculated * Input[i] * 0.01;
    }
}

for (var i = 0; i < Input.length; i++) {
    for (var b = 0; b < InputWeights.length; b++) {
        // learning constant two
        InputWeights[b] = InputWeights[b] + TargetCalculated * Input[i] * 0.001;
    }
}
With this method of updating the weights with separate learning constants, my neural network seems able to backpropagate properly. My question is: how can I explain this?
Thank you,
Here is my code. This function returns a number that follows the standard normal distribution.
var rnorm = function() {
    var n = 1000;
    var x = 0;  // running sum of the uniform draws
    var ss = 0; // running sum of squared deviations from the mean (0.5)
    for (var i = 0; i < n; i++) {
        var a = Math.random();
        x += a;
        ss += Math.pow(a - 0.5, 2);
    }
    var xbar = x / n;      // sample mean
    var v = ss / n;        // sample variance
    var sd = Math.sqrt(v); // sample standard deviation
    // Standardize the sample mean: by the CLT this is approximately N(0, 1)
    return (xbar - 0.5) / (sd / Math.sqrt(n));
};
It is simple and exploits the central limit theorem. Here is a jsfiddle running this thing 100,000 times and printing some info to the console.
https://jsfiddle.net/qx61fqpn/
It looks about right (I haven't written code to draw a histogram yet): the right proportions of values fall within (-1, 1), (-2, 2), (-3, 3), and beyond. The sample mean is 0, the variance and standard deviation are 1, the skewness is 0, and the kurtosis is 3.
But it's kind of slow.
I intend to write more complex functions using this one as a building block (like the chi-sq distribution).
My question is:
How can I rewrite this code to improve its performance? I'm talking speed. Right now, it's written so that someone with a bit of JavaScript and statistics knowledge can see clearly what I'm doing (or trying to do, in the event I've done it wrong).
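One standard alternative worth considering (not from the original post) is the Box-Muller transform, which produces an exact standard normal from just two uniform draws instead of summing a thousand of them. A minimal sketch, with an illustrative name:

// Box-Muller transform: turn two uniforms on (0, 1) into one
// standard normal sample. Rejecting 0 avoids Math.log(0) = -Infinity.
function rnormFast() {
    var u = 0, v = 0;
    while (u === 0) u = Math.random();
    while (v === 0) v = Math.random();
    return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}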
I've spent the last 2 days watching youtube videos on neural networks.
In particular, I've been trying to implement a genetic algorithm that will evolve over time; however, most videos seem to be focused on neural networks that are trained and then used for classification.
Being confused, I decided to simply try to implement the basic structure of the network, and have coded this - in JS, for convenience.
// Logistic sigmoid activation
function sigmoid(x) { return 1 / (1 + Math.E ** -x); }

function Brain(inputs, hiddens, outputs) {
    this.weights = {
        hidden: [],
        output: []
    };
    // One weight per (hidden neuron, input) pair
    for (var i = hiddens; i--;) {
        this.weights.hidden[i] = [];
        for (var w = inputs; w--;) this.weights.hidden[i].push(Math.random());
    }
    // One weight per (output neuron, hidden neuron) pair
    for (var i = outputs; i--;) {
        this.weights.output[i] = [];
        for (var w = hiddens; w--;) this.weights.output[i].push(Math.random());
    }
}
Brain.prototype.compute = function(inputs) {
    // Input layer -> hidden layer
    var hiddenInputs = [];
    for (var i = this.weights.hidden.length; i--;) {
        var dot = 0;
        for (var w = inputs.length; w--;) dot += inputs[w] * this.weights.hidden[i][w];
        hiddenInputs[i] = sigmoid(dot);
    }
    // Hidden layer -> output layer
    var outputs = [];
    for (var i = this.weights.output.length; i--;) {
        var dot = 0;
        for (var w = this.weights.hidden.length; w--;) dot += hiddenInputs[w] * this.weights.output[i][w];
        outputs[i] = sigmoid(dot);
    }
    return outputs;
}
var brain = new Brain(1,2,1);
brain.compute([1]);
I successfully get values between 0 and 1, and when I use specific weights, I get the same value each time for a constant input.
Is the terminology I'm using in my code correct?
I fear I may simply be observing false positives and am not actually feeding forward.
Is the sigmoid function appropriate here? Should I be using it for a genetic / evolving algorithm?
Also, I've noticed that I'm only getting results between 0.5 and 1.
To combine a neural network with a genetic algorithm, your best shot is probably NEAT. There is a very good implementation of this algorithm in JS called 'Neataptic'; you should be able to find it on GitHub.
When combining a GA with an ANN, you generally want to adjust not only the weights but the structure as well.
Sigmoid activation is OK for a GA, but in many cases you also want other activation functions; you can find a small list of activation functions on Wikipedia or create your own.
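On the 0.5-to-1 observation: Math.random() yields weights in [0, 1), so with non-negative inputs every dot product is non-negative, and sigmoid of a non-negative number never drops below 0.5. A minimal tweak (a sketch, not part of the original answer) is to initialize the weights symmetrically around zero:

// Draw initial weights from [-1, 1) instead of [0, 1) so dot products
// can be negative and sigmoid outputs can cover the whole (0, 1) range.
function randomWeight() {
    return Math.random() * 2 - 1;
}
// e.g. in Brain: this.weights.hidden[i].push(randomWeight());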
By reviewing this and this, I've come up with a function that's probably more complex than it should be, but, man, my math sux:
function tablize(elements) {
    var root = Math.floor(Math.sqrt(elements));
    // Collect every factor pair [i, elements / i] with i <= sqrt(elements)
    var factors = [];
    for (var i = 1; i <= root; i++) {
        if (elements % i === 0) {
            factors.push([i, elements / i]);
        }
    }
    // Pick the pair whose sides are closest together (closest to square)
    var smallest = 0;
    var smallestDiff = Infinity;
    for (var f = 0; f < factors.length; f++) {
        var diff = Math.abs(factors[f][0] - factors[f][1]);
        if (diff < smallestDiff) {
            smallestDiff = diff;
            smallest = f;
        }
    }
    return factors[smallest];
}
While this does work, it produces results I'm not satisfied with.
For instance, 7 is divided into 1x7, where I'd like it to be 3x3. That's the minimum optimal grid size needed to fit 7 elements.
Likewise, 3 is divided into 1x3, where I'd like it to be 2x2.
I need this for distributing live camera feed frames on a monitor, but I'm totally lost. The only way I can think of is building an extra function that feeds in the previously generated number and divides again, but that seems wrong.
What is the optimal solution to solve this?
For squares:
function squareNeeded(num) {
    return Math.ceil(Math.sqrt(num));
}
http://jsfiddle.net/aKNVq/
(I think you mean the smallest square of a whole number that is bigger than the given amount, because if you meant a rectangle, then your example for seven would be 2*4 instead of 3*3.)
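If the goal really is the smallest grid that fits n elements while allowing empty cells (an assumption matching the asker's examples), a sketch: take ceil(sqrt(n)) columns, then as many rows as needed.

// Hypothetical grid-sizing helper: 7 -> [3, 3], 3 -> [2, 2], 12 -> [4, 3].
// Unlike the exact-factor approach above, some cells may stay empty.
function gridFor(n) {
    var cols = Math.ceil(Math.sqrt(n));
    var rows = Math.ceil(n / cols);
    return [cols, rows];
}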