reshape 1d array to 3d for tensorflow.js lstm - javascript

I'm trying to predict future stock prices from time-series data. I have an xs array of 251 timesteps and a ys array with the corresponding stock price for each timestep. I have reshaped the xs array to be 3D, but I get this error:
'Input Tensors should have the same number of samples as target Tensors. Found 1 input sample(s) and 251 target sample(s).'
The code for the model is below.
var xs = [];
var ys = [];
for (var i in result) {
  xs.push(i);
  ys.push(result[i].close);
}
var xt = tf.tensor3d(xs, [1, xs.length, 1]);
var yt = tf.tensor2d(ys, [xs.length, 1]);
//xt.reshape([1, xs.length, 1]).print();
//yt.reshape([1, ys.length, 1]).print();
var lstm1 = tf.layers.lstm({units: 32, returnSequences: true, inputShape: [xs.length, 1]});
var model = tf.sequential();
model.add(lstm1);
model.add(tf.layers.dropout({rate: 0.2}));
model.add(tf.layers.lstm({units: 5}));
model.add(tf.layers.dropout({rate: 0.2}));
model.add(tf.layers.dense({units: 1, inputShape: [32], activation: 'softmax'}));
model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy'});
model.fit(xt, yt, {epochs: 1000}).then(() => {
  bestfit = model.predict(tf.tensor(xs, [xs.length, 1])).dataSync();
});
The error comes from model.fit(x, y): there is a mismatch between the shapes of x and y.
x has the shape [1, 251, 1] and y the shape [251, 1]. This does not work because x contains one sample while y contains 251 labels. You have to reshape either x or y.
Reshape x: x.reshape([251, 1, 1]) or x.reshape([251, 1])
or
reshape y: y.reshape([1, 251]) or y.reshape([1, 251, 1])
Note: almost any reshaping will work, as long as the first dimension (the number of samples) is the same for x and y and the product of all dimension sizes remains 251. What matters in the reshaping is not to lose the correlation between the features and the labels.
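To make the sample-count requirement concrete, here is a small plain-JavaScript sketch (samplesMatch is a hypothetical helper, not part of TensorFlow.js) of the check that model.fit() effectively performs on the first dimension of its inputs:

```javascript
// model.fit(x, y) requires that x and y agree on dimension 0,
// the number of samples.
function samplesMatch(xShape, yShape) {
  return xShape[0] === yShape[0];
}

// The failing shapes from the question: 1 input sample vs 251 labels.
console.log(samplesMatch([1, 251, 1], [251, 1]));  // false
// After x.reshape([251, 1, 1]): 251 samples of one timestep each.
console.log(samplesMatch([251, 1, 1], [251, 1])); // true
```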

how to process a convolution for the sum of absolute values for an image in Google Earth Engine

I want to do the following convolution operation on an image: the sum of the absolute values of the differences between each value in the calculation window and the value of the central pixel.
How could I do that in the Google Earth Engine JavaScript API?
Thanks a lot!
I believe you can use the .neighborhoodToBands() method on an image to get what you want. I learned this from the Texture page of the Google Earth Engine API tutorial.
First, load an image and select a single band:
// Load a high-resolution NAIP image.
var image = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613');
// Zoom to San Francisco, display.
Map.setCenter(-122.466123, 37.769833, 17);
Map.addLayer(image, {max: 255}, 'image');
// Get the NIR band.
var nir = image.select('N');
Then create a kernel:
// Create a list of weights for a 9x9 kernel.
var list = [1, 1, 1, 1, 1, 1, 1, 1, 1];
// The center of the kernel is zero.
var centerList = [1, 1, 1, 1, 0, 1, 1, 1, 1];
// Assemble a list of lists: the 9x9 kernel weights as a 2-D matrix.
var lists = [list, list, list, list, centerList, list, list, list, list];
// Create the kernel from the weights.
// Non-zero weights represent the spatial neighborhood.
var kernel = ee.Kernel.fixed(9, 9, lists, -4, -4, false);
Alternative way to create a kernel... This is a function that I've been using to create a kernel with a 0 weight at the center and with a user-defined radius:
var create_kernel = function(pixel_radius) {
  var pixel_diameter = 2 * pixel_radius + 1;
  var weight_val = 1;
  var weights = ee.List.repeat(ee.List.repeat(weight_val, pixel_diameter), pixel_diameter);
  var mid_row = ee.List.repeat(weight_val, pixel_radius)
    .cat([0])
    .cat(ee.List.repeat(weight_val, pixel_radius));
  weights = weights.set(pixel_radius, mid_row);
  var kernel = ee.Kernel.fixed({
    height: pixel_diameter,
    width: pixel_diameter,
    weights: weights
  });
  return kernel;
};
var kernel = create_kernel(4);
var kernel = create_kernel(4);
Convert the neighborhood to bands and then perform the calculation:
// Convert the neighborhood into multiple bands.
var neighs = nir.neighborhoodToBands(kernel);
print(neighs);
// Compute convolution; Focal pixel (represented by original nir image)
// Subtract away the 80 bands representing the 80 neighbors
// (9x9 neighborhood = 81 pixels - 1 focal pixel = 80 neighbors)
var convolution = nir.subtract(neighs).abs().reduce(ee.Reducer.sum());
print(convolution);
Map.addLayer(convolution,
  {min: 20, max: 2500, palette: ['0000CC', 'CC0000']},
  'Convolution');
Does this get you what you want?

Use negative x or y values in Plotly surface data?

Plotly surface data is arranged as a 2d Array (a matrix) whose indices correspond to x and y values and whose elements indicate the z values. E.g. if element [0][0] equals 10, that indicates an (x,y,z) coordinate of (0,0,10).
The problem: Because Array indices start at zero, it seems impossible to graph surfaces that have negative x or y values.
Here is a CodePen with three surfaces plotted. The surfaces look fine because only the (+x, +y) quadrant is shown in the graph. If all four quadrants are displayed (CodePen), the plot ends up looking incomplete because the surfaces won't extend into the remaining three quadrants.
The general form of surface data is:
{
  z: dataArray,
  type: 'surface',
  opacity: 0.9
}
Is there a way to give the surface data an xstart or ystart, or the like, so that full 3d surfaces can be drawn?
You can add your own x and y coordinates; Plotly just assumes them if you do not provide them.
From the documentation:
x (data array)
Sets the x coordinates.
y (data array)
Sets the y coordinates.
Code is based on the example here.
var z = [[8.83,8.89,8.81,8.87,8.9,8.87],
[8.89,8.94,8.85,8.94,8.96,8.92],
[8.84,8.9,8.82,8.92,8.93,8.91],
[8.79,8.85,8.79,8.9,8.94,8.92],
[8.79,8.88,8.81,8.9,8.95,8.92],
[8.8,8.82,8.78,8.91,8.94,8.92],
[8.75,8.78,8.77,8.91,8.95,8.92],
[8.8,8.8,8.77,8.91,8.95,8.94],
[8.74,8.81,8.76,8.93,8.98,8.99],
[8.89,8.99,8.92,9.1,9.13,9.11],
[8.97,8.97,8.91,9.09,9.11,9.11],
[9.04,9.08,9.05,9.25,9.28,9.27],
[9,9.01,9,9.2,9.23,9.2],
[8.99,8.99,8.98,9.18,9.2,9.19],
[8.93,8.97,8.97,9.18,9.2,9.18]];
var x = [];
var y = [];
for (var i = 0; i < z.length; i += 1) {
  x[i] = [];
  y[i] = [];
  for (var j = 0; j < z[i].length; j += 1) {
    x[i].push(j + i - 10);
    y[i].push(j - 3);
  }
}
Plotly.newPlot('myDiv', [{
  z: z,
  x: x,
  y: y,
  type: 'surface'
}]);
<script src="https://cdn.plot.ly/plotly-latest.min.js"></script>
<div id="myDiv"></div>

How to write jqPlot values into variables?

I use jqPlot to plot a line chart from a .csv file.
I need to get the xmax and ymax values of the plot and use them for further processing.
How do I get these (or any other values) and write them into my own variables?
EDIT
Let's say this is my plot:
What I need is not the maximum x-value from the data array (here 1380); I need the maximum value from the plot's axis (here 2000). For further processing I would like to add rectangles inside the plot, see the second picture, and calculate their height as an x-value, not as a pixel value.
Therefore I need to access the jqPlot variables, not the array variables I pass to jqPlot.
So, at some point you have an array of values that you passed to jqPlot to draw the graph, for example:
var myData = [[1, 2],[3,5.12],[5,13.1],[7,33.6],[9,85.9],[11,219.9]];
If you want to find the maximum x and y values, you just need to loop through the array keeping track of the largest value you've found so far.
// Initialize to -Infinity so zero and negative values are handled correctly.
var maxX = -Infinity,
    maxY = -Infinity;
for (var i = 0; i < myData.length; i++) {
  if (myData[i][0] > maxX) {
    maxX = myData[i][0];
  }
  if (myData[i][1] > maxY) {
    maxY = myData[i][1];
  }
}
Here's a simple demo: http://jsfiddle.net/LAbvj/
EDIT: Ok, so I think what you are now asking for is the maximum for each axis. In that case, this is simple:
var plot1 = $.jqplot('chart1', [
  [3, 7, 19, 1, 4, 6, 8, 2, 5]
]);
console.log(plot1.axes.xaxis.max);
console.log(plot1.axes.yaxis.max);
See demo: http://jsfiddle.net/KJTRF/

Tile-based pathfinding in a 1- or 2-dimensional array

As far as I know, all tile-based map editors export a JSON object containing one-dimensional arrays, while most pathfinding libraries/tutorials are written for two-dimensional arrays.
Also, if I wanted to do pathfinding in this one-dimensional array and the array were huge, I'm guessing this would cause performance issues.
So why do most tile-based map editors output a one-dimensional array, and how should I handle it for pathfinding?
example tile editor
Just google pathfinding to find all the two-dimensional pathfinding tutorials.
How to convert depends on the orientation in which the map was flattened into the 1-D array:
function convert(x, y, height, width) {
  return x + y * width; // rows
  /* return y + x * height; // cols */
}

function reverse(i, height, width) {
  var x, y;
  // rows
  x = i % width;
  y = (i - x) / width;
  /* // cols
  y = i % height;
  x = (i - y) / height; */
  return [x, y];
}
Now, say we have a 6 wide by 3 high map
1-D | 2-D
0 1 2 3 4 5 | x0y0 x1y0 x2y0 x3y0 x4y0 x5y0
6 7 8 9 10 11 | x0y1 x1y1 x2y1 x3y1 x4y1 x5y1
12 13 14 15 16 17 | x0y2 x1y2 x2y2 x3y2 x4y2 x5y2
Pick an index in the 1-D array, e.g. i = 8. To convert it to its 2-D co-ordinates we can use reverse:
reverse(8, 3, 6); // [2, 1]
// i, h, w = [x, y]
Or say we picked co-ordinates x = 2, y = 1 in our 2-D Array, we can convert it to the index in the 1-D Array with convert
convert(2, 1, 3, 6); // 8
// x, y, h, w = i
Once you can convert between the two systems you can do your path finding as normal. You can name these functions however you like, I wrote them more so you can see how to switch between the two systems.
Depending on how the map was made, the y axis may have 0 at the bottom rather than the top, or the entire thing could be mirrored across a diagonal (which I called cols in the above functions). It really depends on how it was done, but as long as you are consistent with the conversion and have the correct height and width (the number of rows and columns respectively), it should not matter.
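As a concrete illustration of the bottom-origin case, here is a small sketch (the name convertBottomOrigin is my own, not from any library) that flips y before applying the same row-major formula as convert above:

```javascript
// Variant of the row-major conversion for maps whose y axis
// starts at the bottom of the map instead of the top.
function convertBottomOrigin(x, y, height, width) {
  // Flip y so that y = 0 refers to the last stored row.
  return x + (height - 1 - y) * width;
}

// In the 6-wide, 3-high example map, the bottom-left tile (x=0, y=0)
// lands on index 12, the first cell of the last stored row.
console.log(convertBottomOrigin(0, 0, 3, 6)); // 12
```

The inverse follows the same pattern: recover the stored row with i % width and (i - x) / width, then flip it back with height - 1 - row.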
One approach might be to compute an offset into the 1-D array from the 2-D coordinates of the tile:
var maxX = 100; // assumes rows of 100 tiles
var offset = y * maxX + x;
tile[offset] = ...
No need to convert; just reference the tile directly in the 1-D array. I used this approach for A* in a recent game project, and it works for me.

Iterate each point within a polygon

Assuming I have the vertices of some polygon in a grid environment, how can I iterate through each cell it contains (including those on the edge)?
To clarify, I have the following vertices (counted as if the topleft is (0, 0)):
//each point is [x, y]
var verts = [
  [1, 1],
  [3, 1],
  [3, 2],
  [4, 2],
  [4, 4],
  [0, 4],
  [0, 3],
  [1, 3]
];
Which would define a polygon such as this:
Where each green dot is a point I would like to iterate over, based on the vertices above. There is no pattern to the direction the vertices walk along the edge of the polygon; it could be clockwise or counter-clockwise. However, they are in order: if you put down your pen and moved it to each vertex in order without lifting it, you would draw the outline without crossing into the polygon's interior.
The use case being I have the imageData from a PNG loaded via the canvas API. This PNG is split into "zones", and I need to iterate each pixel of the current "zone". Each "zone" is defined by a vertex array like above.
I tried something like the following, which will create a square to iterate through for each set of 4 vertices in the array.
for (var v = 0, vl = verts.length - 4; v < vl; ++v) {
  //grabbing the minimum X, Y and maximum X, Y to define a square to iterate in
  var minX = Math.min(verts[v][0], verts[v + 1][0], verts[v + 2][0], verts[v + 3][0]),
      minY = Math.min(verts[v][1], verts[v + 1][1], verts[v + 2][1], verts[v + 3][1]),
      maxX = Math.max(verts[v][0], verts[v + 1][0], verts[v + 2][0], verts[v + 3][0]),
      maxY = Math.max(verts[v][1], verts[v + 1][1], verts[v + 2][1], verts[v + 3][1]);
  for (var x = minX; x < maxX; ++x) {
    for (var y = minY; y < maxY; ++y) {
      //do my checks on this pixel located at X, Y in the PNG
    }
  }
}
Two big problems with that though:
It can repeat points within the polygon, and
It can grab points outside the polygon
I can solve the first issue by tracking which points I check, so I don't repeat a check. The second can only be solved by running a PointInPoly check on each point, which would make this solution much heavier than I want it to be.
EDIT
Iterating over each pixel in the entire image and applying a PointInPoly check to each is also unacceptable; it would be even slower than the above algorithm.
If your polygons are convex, you can do the following:
Create a line for each edge of the polygon, noting which side is inside and which is outside (this is based on the normal, which depends on the winding direction).
For every pixel inside the bounding box that you already calculated, check whether the pixel is on the inside of each line. If the pixel is on the outside of any of the lines, it is outside the polygon. If it is inside all of them, it is inside the polygon.
The basic algorithm is from here: https://github.com/thegrandpoobah/voronoi/blob/master/stippler/stippler.cpp#L187
If your polygons are not convex, what I would do is actually draw the polygon on the canvas in a known colour and then apply an iterative flood-fill algorithm. That requires knowing at least one pixel on the inside, but that shouldn't be an expensive test. This may not be suitable in JavaScript, though, if you can't do it in an offscreen buffer (I'm not familiar with the canvas tag).
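For the convex case, the half-plane test described above can be sketched like this (a minimal illustration, assuming the vertices are listed in a consistent winding order; boundary points count as inside, matching the question's requirement):

```javascript
// Returns true if (px, py) is inside (or on the edge of) a convex
// polygon given as an array of [x, y] vertices in winding order.
function pointInConvexPoly(px, py, verts) {
  var sign = 0;
  for (var i = 0; i < verts.length; i++) {
    var ax = verts[i][0], ay = verts[i][1];
    var bx = verts[(i + 1) % verts.length][0];
    var by = verts[(i + 1) % verts.length][1];
    // Cross product tells us which side of edge a->b the point is on.
    var cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    if (cross !== 0) {
      if (sign === 0) sign = cross > 0 ? 1 : -1;
      else if ((cross > 0 ? 1 : -1) !== sign) return false; // outside one edge
    }
  }
  return true; // on the inside (or boundary) of every edge
}

var square = [[0, 0], [4, 0], [4, 4], [0, 4]];
console.log(pointInConvexPoly(2, 2, square)); // true
console.log(pointInConvexPoly(5, 2, square)); // false
```

Each pixel in the bounding box would then be run through this test, and only the inside ones processed.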
