Currently I have circles drawn on a map, and I set the radius with:
radius: density * (1 - 0.65)
That works reasonably well for large values, since it keeps the circles from getting too big when the density is huge. But everything changes with small density values: the circles get far too small.
UPDATE
One idea: these circles represent a bunch of data on the map, and we also have the total density, called globalDensity, which is the sum of all densities. Maybe there is a way to size each circle as a percentage of that?
UPDATE TWO
Actually that idea is not possible, since the total is only calculated after all circles have been placed on the map.
UPDATE THREE
We have plenty of different circles on a map with dynamic density values; these vary from a minimum of zero to some maximum X.
radius: density * (1 - 0.65) works well for the big values, but not when the density is, say, 1.
At some point I also sum ALL the densities, but unfortunately that happens after the loop (I push every density into an array and then compute the sum), so it's too late to use the total as a maxValue in the radius calculation.
A comment below the question suggested using a minValue and a maxValue, which makes sense, but I'm not sure how to calculate the percentage based on them. I tried the following, but I'm doing it wrong, as the result is a huge circle:
var minRadius = 1000;
var maxRadius = density;
var percentage = maxRadius * (1 - 0.40);
var radius = minRadius + percentage * (maxRadius - minRadius).toFixed(0);
Looping over data is fast; don't worry about it.
One might think that with big data it would be better to avoid looping twice, but given how well modern JIT compilers optimize, two loops each performing a single action are often faster than a single loop doing two things.
So simply do a first pass over your data to get the extents:
const data = [ /* ... your data, items shaped like { value: number } ... */ ];
const extents = data.reduce( (extents, datum) => {
const value = datum.value;
extents.min = Math.min( extents.min, value );
extents.max = Math.max( extents.max, value );
return extents;
}, { min: Infinity, max: -Infinity } );
Now you need to define the minRadius and maxRadius values as the visual extents you want (presumably in pixels).
const minRadius = 5; // circles can't be smaller than that
const maxRadius = 30; // circles can't be bigger than that
Finally, you can set your circles' radii based on these extents:
const valDistance = extents.max - extents.min;
const radiusDistance = maxRadius - minRadius;
data.forEach( datum => {
datum.radius = (datum.value - extents.min) / valDistance * radiusDistance + minRadius;
} );
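One edge case worth guarding against, as a sketch under the assumption that all densities could be equal: if extents.min === extents.max, then valDistance is 0 and the division yields NaN. A minimal fallback:
data.forEach( datum => {
    // when every value is identical, fall back to the midpoint radius
    datum.radius = valDistance === 0
        ? (minRadius + maxRadius) / 2
        : (datum.value - extents.min) / valDistance * radiusDistance + minRadius;
} );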
const data = Array.from({ length: 20 }, () => ({
value: Math.random() * 300 - 150
}));
const extents = data.reduce((extents, datum) => {
const value = datum.value;
extents.min = Math.min(extents.min, value);
extents.max = Math.max(extents.max, value);
return extents;
}, { min: Infinity, max: -Infinity });
const minRadius = 5; // circles can't be smaller than that
const maxRadius = 30; // circles can't be bigger than that
const valDistance = extents.max - extents.min;
const radiusDistance = maxRadius - minRadius;
data.forEach(datum => {
datum.radius = (datum.value - extents.min) / valDistance * radiusDistance + minRadius;
});
console.log(data);
I've been working on a game which requires thousands of very small images (20×20 px) to be rendered and rotated each frame. A sample snippet is provided below.
I've used every trick I know to speed it up to increase frame rates but I suspect there are other things I can do to optimise this.
Current optimisations include:
Replacing save/restore with explicit transformations
Avoiding scale/size-transformations
Being explicit about destination sizes rather than letting the browser guess
requestAnimationFrame rather than setInterval
Tried but not present in example:
Rendering objects in batches to other offscreen canvases then compiling later (reduced performance)
Avoiding floating-point locations (not possible here: placement precision requires floats)
Not using alpha on main canvas (not shown in snippet due to SO snippet rendering)
//initial canvas and context
var canvas = document.getElementById('canvas');
canvas.width = 800;
canvas.height = 800;
var ctx = canvas.getContext('2d');
//create an image (I) to render
let myImage = new OffscreenCanvas(10,10);
let myImageCtx = myImage.getContext('2d');
myImageCtx.fillRect(0,2.5,10,5);
myImageCtx.fillRect(0,0,2.5,10);
myImageCtx.fillRect(7.5,0,2.5,10);
//animation
let animation = requestAnimationFrame(frame);
//fill an initial array of [n] object positions and angles
let myObjects = [];
for (let i = 0; i <1500; i++){
myObjects.push({
x : Math.floor(Math.random() * 800),
y : Math.floor(Math.random() * 800),
angle : Math.floor(Math.random() * 360),
});
}
//render a specific frame
function frame(){
ctx.clearRect(0,0,canvas.width, canvas.height);
//draw each object and update its position
for (let i = 0, l = myObjects.length; i<l;i++){
drawImageNoReset(ctx, myImage, myObjects[i].x, myObjects[i].y, myObjects[i].angle);
myObjects[i].x += 1; if (myObjects[i].x > 800) {myObjects[i].x = 0}
myObjects[i].y += .5; if (myObjects[i].y > 800) {myObjects[i].y = 0}
myObjects[i].angle += .01; if (myObjects[i].angle > 360) {myObjects[i].angle = 0}
}
//reset the transform and call next frame
ctx.setTransform(1, 0, 0, 1, 0, 0);
requestAnimationFrame(frame);
}
//fastest transform draw method - no transform reset
function drawImageNoReset(myCtx, image, x, y, rotation) {
myCtx.setTransform(1, 0, 0, 1, x, y);
myCtx.rotate(rotation);
myCtx.drawImage(image, 0,0,image.width, image.height,-image.width / 2, -image.height / 2, image.width, image.height);
}
<canvas id="canvas"></canvas>
You are very close to the maximum throughput possible with the 2D API on a single thread; however, there are some minor points that can improve performance.
WebGL2
First though: if you are after the best performance possible using JavaScript, you must use WebGL.
With WebGL2 you can draw 8 or more times as many 2D sprites as with the 2D API, and you get a larger range of FX (e.g. color, shadow, bump, single-call smart tile maps...).
WebGL is VERY worth the effort.
Performance-related points
globalAlpha is applied on every drawImage call; values other than 1 do not affect performance.
Avoid the call to rotate. The two math calls (which can also fold in a scale) are a tiny bit quicker than rotate, e.g. ax = Math.cos(rot) * scale; ay = Math.sin(rot) * scale; ctx.setTransform(ax, ay, -ay, ax, x, y).
Rather than using many images, put them all in a single image (sprite sheet). (Not applicable in this case.)
Don't litter the global scope. Keep objects as close as possible to function scope and pass objects by reference. Access to a globally scoped variable is MUCH slower than access to a locally scoped one. It is best to use modules, as they have their own local scope.
Use radians. Converting angles to degrees and back is a waste of processing time. Learn to work in radians: Math.PI * 2 === 360°, Math.PI === 180°, and so on.
For positive integers, don't use Math.floor; use a bit-wise operator instead, as bit-wise operators automatically convert doubles to Int32. E.g. Math.floor(Math.random() * 800) is faster written as Math.random() * 800 | 0 (| is OR). A caveat is sketched just after this list.
Be aware of the Number type in use. Converting to an integer costs cycles if you convert it back to a double every time you use it.
Always pre-calculate whenever possible. E.g. each time you render an image you negate and halve both the width and the height; these values can be pre-calculated.
Avoid array lookups (indexing). Indexing an object in an array is slower than a direct reference. E.g. the main loop indexes myObjects 11 times. Use a for...of loop, so there is only one array lookup per iteration and the counter is a more performant internal one. (See example.)
Though there is a performance penalty for this, separating the update and render loops lets you gain performance on slower rendering devices by updating the game state twice for every rendered frame. E.g. if a slow device drops to 30FPS and the game slows to half speed, detect this, update the state twice, and render once. The game will still present at 30FPS but play at normal speed (and may even avoid the occasional dropped frame, as you have halved the rendering load).
Do not be tempted to use delta time; it carries some performance overhead (it forces doubles for many values that could be Ints) and will actually reduce animation quality.
Whenever possible avoid conditional branching, or use the more performant alternatives. E.g. in your example you wrap objects across boundaries using if statements; this can be done with the remainder operator % (see example).
You check rotation > 360. This is not needed, as rotation is cyclic: a value of 360 is the same as 44444160 (Math.PI * 2 is the same rotation as Math.PI * 246912).
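A quick sketch of the bit-wise truncation caveat mentioned in the list above: | 0 truncates toward zero and only works within Int32 range, so it is not a drop-in replacement for Math.floor on negative or very large values.
console.log(Math.floor(3.7));     // 3
console.log(3.7 | 0);             // 3 (same result for positive values)
console.log(Math.floor(-3.7));    // -4 (floors toward -Infinity)
console.log(-3.7 | 0);            // -3 (truncates toward zero)
console.log((2 ** 31 + 0.5) | 0); // -2147483648 (Int32 overflow)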
A non-performance point.
Each animation call prepares a frame for the next (upcoming) display refresh. In your code you display the game state and then update it, which means your game state is one frame ahead of what the client sees. Always update the state, then display it.
Example
This example has added some additional load to the objects. They:
can go in any direction
have individual speeds and rotations
don't blink in and out at the edges
The example includes a utility that attempts to balance the frame rate by varying the number of objects. Every 15 frames the (work)load is updated; eventually it will reach a stable rate.
DO NOT gauge the performance by running this snippet. SO snippets sit under all the code that runs the page, and the code is also modified and monitored (to protect against infinite loops); the code you see is not the code that runs in the snippet. Just moving the mouse can cause dozens of dropped frames in an SO snippet.
For accurate results, copy the code and run it alone on a page (and remove any browser extensions while testing).
Use this or something similar to regularly test your code and build experience in knowing what is good and bad for performance.
Meaning of the rate text (fields in display order):
+/-N: objects added or removed for the next period
N sprites: total number of objects rendered per frame during the previous period
N ms: running mean of the render time in ms (this is not the frame rate)
N fps: best mean frame rate
N dropped: frames dropped during the period. A dropped frame has the duration implied by the reported frame rate, i.e. for "30fps 5dropped" the five dropped frames are at 30fps, so the total time lost is 5 * (1000 / 30) ms.
const IMAGE_SIZE = 10;
const IMAGE_DIAGONAL = (IMAGE_SIZE ** 2 * 2) ** 0.5 / 2; // half the image diagonal, used as an off-screen wrap margin
const DISPLAY_WIDTH = 800;
const DISPLAY_HEIGHT = 800;
const DISPLAY_OFFSET_WIDTH = DISPLAY_WIDTH + IMAGE_DIAGONAL * 2;
const DISPLAY_OFFSET_HEIGHT = DISPLAY_HEIGHT + IMAGE_DIAGONAL * 2;
const PERFORMANCE_SAMPLE_INTERVAL = 15; // rendered frames
const INIT_OBJ_COUNT = 500;
const MAX_CPU_COST = 8; // in ms
const MAX_ADD_OBJ = 10;
const MAX_REMOVE_OBJ = 5;
canvas.width = DISPLAY_WIDTH;
canvas.height = DISPLAY_HEIGHT;
requestAnimationFrame(start);
function createImage() {
const image = new OffscreenCanvas(IMAGE_SIZE,IMAGE_SIZE);
const ctx = image.getContext('2d');
ctx.fillRect(0, IMAGE_SIZE / 4, IMAGE_SIZE, IMAGE_SIZE / 2);
ctx.fillRect(0, 0, IMAGE_SIZE / 4, IMAGE_SIZE);
ctx.fillRect(IMAGE_SIZE * (3/4), 0, IMAGE_SIZE / 4, IMAGE_SIZE);
image.neg_half_width = -IMAGE_SIZE / 2; // snake case to ensure future proof (no name clash)
image.neg_half_height = -IMAGE_SIZE / 2; // use of Image API
return image;
}
function createObject() {
return {
x : Math.random() * DISPLAY_WIDTH,
y : Math.random() * DISPLAY_HEIGHT,
r : Math.random() * Math.PI * 2,
dx: (Math.random() - 0.5) * 2,
dy: (Math.random() - 0.5) * 2,
dr: (Math.random() - 0.5) * 0.1,
};
}
function createObjects() {
const objects = [];
var i = INIT_OBJ_COUNT;
while (i--) { objects.push(createObject()) }
return objects;
}
function update(objects){
for (const obj of objects) {
obj.x = ((obj.x + DISPLAY_OFFSET_WIDTH + obj.dx) % DISPLAY_OFFSET_WIDTH);
obj.y = ((obj.y + DISPLAY_OFFSET_HEIGHT + obj.dy) % DISPLAY_OFFSET_HEIGHT);
obj.r += obj.dr;
}
}
function render(ctx, img, objects){
for (const obj of objects) { drawImage(ctx, img, obj) }
}
function drawImage(ctx, image, {x, y, r}) {
const ax = Math.cos(r), ay = Math.sin(r);
ctx.setTransform(ax, ay, -ay, ax, x - IMAGE_DIAGONAL, y - IMAGE_DIAGONAL);
ctx.drawImage(image, image.neg_half_width, image.neg_half_height);
}
function timing(framesPerTick) { // creates a running mean frame time
const samples = [0,0,0,0,0,0,0,0,0,0];
const sCount = samples.length;
var samplePos = 0;
var now = performance.now();
const maxRate = framesPerTick * (1000 / 60);
const API = {
get FPS() {
var time = performance.now();
const FPS = 1000 / ((time - now) / framesPerTick);
const dropped = ((time - now) - maxRate) / (1000 / 60) | 0;
now = time;
if (FPS > 30) { return "60fps " + dropped + "dropped" };
if (FPS > 20) { return "30fps " + (dropped / 2 | 0) + "dropped" };
if (FPS > 15) { return "20fps " + (dropped / 3 | 0) + "dropped" };
if (FPS > 10) { return "15fps " + (dropped / 4 | 0) + "dropped" };
return "Too slow";
},
time(time) { samples[(samplePos++) % sCount] = time },
get mean() { return samples.reduce((total, val) => total += val, 0) / sCount },
};
return API;
}
function updateStats(CPUCost, objects) {
const fps = CPUCost.FPS;
const mean = CPUCost.mean;
const cost = mean / objects.length; // estimate per object CPU cost
const count = MAX_CPU_COST / cost | 0;
const objCount = objects.length;
var str = "0";
if (count < objects.length) {
var remove = Math.min(MAX_REMOVE_OBJ, objects.length - count);
str = "-" + remove;
objects.length -= remove;
} else if (count > objects.length + MAX_ADD_OBJ) {
let i = MAX_ADD_OBJ;
while (i--) {
objects.push(createObject());
}
str = "+" + MAX_ADD_OBJ;
}
info.textContent = str + ": " + objCount + " sprites " + mean.toFixed(3) + "ms " + fps;
}
function start() {
var frameCount = 0;
const CPUCost = timing(PERFORMANCE_SAMPLE_INTERVAL);
const ctx = canvas.getContext('2d');
const image = createImage();
const objects = createObjects();
function frame(time) {
frameCount ++;
const start = performance.now();
ctx.setTransform(1, 0, 0, 1, 0, 0);
ctx.clearRect(0, 0, DISPLAY_WIDTH, DISPLAY_HEIGHT);
update(objects);
render(ctx, image, objects);
requestAnimationFrame(frame);
CPUCost.time(performance.now() - start);
if (frameCount % PERFORMANCE_SAMPLE_INTERVAL === 0) {
updateStats(CPUCost, objects);
}
}
requestAnimationFrame(frame);
}
#info {
position: absolute;
top: 10px;
left: 10px;
background: #DDD;
font-family: arial;
font-size: 18px;
}
<canvas id="canvas"></canvas>
<div id="info"></div>
I want to draw StackOverflow's logo with this Neural Network:
The NN should ideally become [r, g, b] = f([x, y]). In other words, it should return RGB colors for a given pair of coordinates. The FFNN works pretty well for simple shapes like a circle or a box. For example, after several thousand epochs a circle looks like this:
Try it yourself: https://codepen.io/adelriosantiago/pen/PoNGeLw
However, since StackOverflow's logo is far more complex, even after several thousand iterations the FFNN's results are somewhat poor:
From left to right:
StackOverflow's logo at 256 colors.
With 15 hidden neurons: The left handle never appears.
50 hidden neurons: Pretty poor result in general.
0.03 as learning rate: Blue shows up in the results (blue is not in the original image).
A time-decreasing learning rate: The left handle appears but other details are now lost.
Try it yourself: https://codepen.io/adelriosantiago/pen/xxVEjeJ
Some parameters of interest are synaptic.Architect.Perceptron definition and learningRate value.
How can I improve the accuracy of this NN?
Could you improve the snippet? If so, please explain what you did. If there is a better NN architecture to tackle this type of job, could you please provide an example?
Additional info:
Artificial Neural Network library used: Synaptic.js
To run this example in your localhost: See repository
By adding another layer, you get better results:
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
There are also small changes you can make that improve efficiency (marginally):
Here is my optimized code:
const width = 125
const height = 125
const outputCtx = document.getElementById("output").getContext("2d")
const iterationLabel = document.getElementById("iteration")
const stopAtIteration = 3000
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
let iteration = 0
let inputData = (() => {
const tempCtx = document.createElement("canvas").getContext("2d")
tempCtx.drawImage(document.getElementById("input"), 0, 0)
return tempCtx.getImageData(0, 0, width, height)
})()
const getRGB = (img, x, y) => {
var k = (width * y + x) * 4; // row-major index: width, not height (they are equal here, but width is the correct factor)
return [
img.data[k] / 255, // R
img.data[k + 1] / 255, // G
img.data[k + 2] / 255, // B
//img.data[(height * y + x) * 4 + 3], // Alpha not used
]
}
const paint = () => {
var imageData = outputCtx.getImageData(0, 0, width, height)
for (let x = 0; x < width; x++) {
for (let y = 0; y < height; y++) {
var rgb = perceptron.activate([x / width, y / height])
var k = (width * y + x) * 4; // row-major index, as in getRGB
imageData.data[k] = rgb[0] * 255
imageData.data[k + 1] = rgb[1] * 255
imageData.data[k + 2] = rgb[2] * 255
imageData.data[k + 3] = 255 // Alpha not used
}
}
outputCtx.putImageData(imageData, 0, 0)
setTimeout(train, 0)
}
const train = () => {
iterationLabel.innerHTML = ++iteration
if (iteration > stopAtIteration) return
let learningRate = 0.01 / (1 + 0.0005 * iteration) // Attempt with dynamic learning rate
//let learningRate = 0.01 // Attempt with non-dynamic learning rate
for (let x = 0; x < width; x += 1) {
for (let y = 0; y < height; y += 1) {
perceptron.activate([x / width, y / height])
perceptron.propagate(learningRate, getRGB(inputData, x, y))
}
}
paint()
}
const startTraining = (btn) => {
btn.disabled = true
train()
}
EDIT: I made another CodePen with even better results:
https://codepen.io/xurei/pen/KKzWLxg
It is likely over-fitted, BTW.
The perceptron definition:
let perceptron = new synaptic.Architect.Perceptron(2, 8, 15, 7, 3)
Taking some insights from the lecture/slides of Bhiksha Raj (from slide 62 onwards), and summarizing as below:
Each node can be seen as a linear classifier, and a combination of several nodes in a single layer of a neural network can approximate any basic shape. For example, a rectangle can be formed by 4 nodes, one per edge, with the shape approximated by the final output layer.
For complex shapes such as a circle, a single layer may require an unbounded number of nodes. The same likely holds for a single layer and two disjoint shapes (a non-overlapping triangle and rectangle). However, these can still be learnt using more than 1 hidden layer, where the 1st layer learns the basic shapes and the 2nd layer approximates their disjoint combinations.
Thus, you can treat this logo as a combination of disjoint rectangles (5 rectangles for the orange part and 3 for the grey). We could use at least 32 nodes in the 1st hidden layer and a few nodes in the 2nd hidden layer. However, we don't have control over what each node learns, so a few more neurons than strictly required should help; a sketch follows below.
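To make that concrete, a hedged sketch using the same synaptic.js API as the question (the layer sizes are illustrative, not tuned values):
// ~32 first-layer nodes for the rectangle edges, a smaller second layer
// to combine them into the logo's disjoint shapes
let perceptron = new synaptic.Architect.Perceptron(2, 32, 8, 3);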
I'm working on an AE project where around 50 emojis should have a drop shadow on the floor. To make things easier, I tried to add an expression that automatically shrinks and grows the shadows based on the distance of the emoji to the floor.
Here is what I've tried
Drop Shadow Approach
You can see that the shadow grows and shrinks, but in the wrong direction: when the emoji comes closer to the floor the shadow shrinks, and when the distance increases it grows. I need the opposite of the current behavior.
How do I achieve that?
This is the expression I've used for the scale property of the shadow layer. The shadow layer is separate from the emoji layer, so I have a composition with only 2 layers.
var y = thisComp.layer("smile").position[1];
var dist = Math.sqrt( Math.pow((this.position[0]-this.position[0]), 2) + Math.pow((this.position[1]-y), 2) );
newValue = dist ;
xScale = newValue;
yScale = newValue;
[xScale,yScale]
Thanks for your time.
The basic concept here is mapping values from one range to another. You want to say that (for example) as the distance changes between 0 and 100, the scale should change proportionally between 1 and 0.
function map ( x, oldMin, oldMax, newMin, newMax ) {
return newMin + ( x - oldMin ) / ( oldMax - oldMin ) * ( newMax - newMin );
}
var minDistance = 0;
var maxDistance = 100;
var maxScale = 1;
var minScale = 0;
xScale = yScale = map( dist, minDistance, maxDistance, maxScale, minScale );
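Putting it together with the distance calculation from the question, the whole scale expression might look like the sketch below; minDistance, maxDistance, and the 100..0 percent range are assumptions you would tune to your comp:
function map(x, oldMin, oldMax, newMin, newMax) {
    return newMin + (x - oldMin) / (oldMax - oldMin) * (newMax - newMin);
}
var floorY = thisComp.layer("smile").position[1];
var dist = Math.abs(position[1] - floorY); // only the vertical distance matters here
var s = map(dist, 0, 100, 100, 0);         // close to the floor = big shadow
s = Math.max(0, Math.min(100, s));         // clamp when dist leaves the mapped range
[s, s]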
I've created a pinch filter/effect on canvas using the following algorithm:
// iterate pixels
for (var i = 0; i < originalPixels.data.length; i+= 4) {
// calculate a pixel's position, distance, and angle
var pixel = new Pixel(affectedPixels, i, origin);
// check if the pixel is in the effect area
if (pixel.dist < effectRadius) {
// initial method (flawed)
// iterate original pixels and calculate the new position of the current pixel in the affected pixels
if (method.value == "org2aff") {
var targetDist = ( pixel.dist - (1 - pixel.dist / effectRadius) * (effectStrength * effectRadius) ).clamp(0, effectRadius);
var targetPos = calcPos(origin, pixel.angle, targetDist);
setPixel(affectedPixels, targetPos.x, targetPos.y, getPixel(originalPixels, pixel.pos.x, pixel.pos.y));
} else {
// alternative method (better)
// iterate affected pixels and calculate the original position of the current pixel in the original pixels
var originalDist = (pixel.dist + (effectStrength * effectRadius)) / (1 + effectStrength);
var originalPos = calcPos(origin, pixel.angle, originalDist);
setPixel(affectedPixels, pixel.pos.x, pixel.pos.y, getPixel(originalPixels, originalPos.x, originalPos.y));
}
} else {
// copy unaffected pixels from original to new image
setPixel(affectedPixels, pixel.pos.x, pixel.pos.y, getPixel(originalPixels, pixel.pos.x, pixel.pos.y));
}
}
I've struggled a lot to get it to this point and I'm quite happy with the result. Nevertheless, I have a small problem: jagged pixels. Compare the JS pinch with Gimp's:
I don't know what I'm missing. Do I need to apply another filter after the actual filter, or is my algorithm wrong altogether?
I can't add the full code here (as a SO snippet) because it contains 4 base64 images/textures (65k chars in total). Instead, here's a JSFiddle.
One way to clean up the result is supersampling. Here's a simple example: https://jsfiddle.net/Lawmo4q8/
Basically, instead of calculating a single value for a single pixel, you take multiple value samples within/around the pixel...
let color =
calcColor(x - 0.25, y - 0.25) + calcColor(x + 0.25, y - 0.25) +
calcColor(x - 0.25, y + 0.25) + calcColor(x + 0.25, y + 0.25);
...and merge the results in some way.
color /= 4;
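Applied to the pinch, a sketch of how the four sub-samples could be gathered and averaged per channel, reusing calcPos and getPixel from the question (effectRadius and effectStrength are assumed to be in scope, and getPixel is assumed to round or clamp fractional coordinates):
function pinchSampleAveraged(originalPixels, origin, x, y) {
    const offsets = [[-0.25, -0.25], [0.25, -0.25], [-0.25, 0.25], [0.25, 0.25]];
    const sum = [0, 0, 0, 0];
    for (const [ox, oy] of offsets) {
        const dx = x + ox - origin.x, dy = y + oy - origin.y;
        const dist = Math.sqrt(dx * dx + dy * dy);
        const angle = Math.atan2(dy, dx);
        // inverse mapping from the question's "alternative method"
        const srcDist = (dist + effectStrength * effectRadius) / (1 + effectStrength);
        const srcPos = calcPos(origin, angle, srcDist);
        const sample = getPixel(originalPixels, srcPos.x, srcPos.y);
        for (let c = 0; c < 4; c++) sum[c] += sample[c];
    }
    return sum.map(v => v / 4); // average of the four samples per channel
}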
How can I do this in Highcharts?
A horizontal bar chart with a gradient, red to grey, for negative through positive values.
If you already use a Highcharts module that relies on a color axis, you can use its tweenColors function to find the intermediate color.
As was mentioned, you have to loop through the data and set each point's color. If you don't want to use any other module, you need to compute the intermediate colors yourself; an answer can be found here.
var data = [100, 50, 20, -10],
min = Math.min.apply(null, data),
max = Math.max.apply(null, data),
total = max - min,
colorMin = new Highcharts.Color('rgb(255, 0, 0)'),
colorMax = new Highcharts.Color('white'),
tweenColors = Highcharts.ColorAxis.prototype.tweenColors;
var coloredData = data.map(value => {
var pos = (value - min) / total;
return {
y: value,
color: tweenColors(colorMin, colorMax, pos)
};
});
example: http://jsfiddle.net/oudktg0o/
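For completeness, a minimal chart setup consuming coloredData might look like this (the container id, title, and categories are placeholders, assuming a Highcharts version that provides Highcharts.chart):
Highcharts.chart('container', {
    chart: { type: 'bar' }, // horizontal bars
    title: { text: 'Gradient-colored bars' },
    xAxis: { categories: ['A', 'B', 'C', 'D'] },
    series: [{ name: 'Values', data: coloredData }] // each point carries its own color
});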