I am not happy with how this code draws this music visualizer using canvas and getByteFrequencyData. https://share.getcloudapp.com/Kou7AJb1
It seems the bars are too large, and I think it's because the FFT (Fast Fourier Transform) array contains a wide spectrum of data, while my code generates n bars based on the width of the canvas.
Then, after having n bars, I map the FFT to the same index as the bar, leaving out lots of useful information.
function convertRange(value: any, r1: any, r2: any) {
  return ((value - r1[0]) * (r2[1] - r2[0])) / (r1[1] - r1[0]) + r2[0];
}

// line up and down
const drawVisualizer2 = ({ canvas, frameData, background }: any) => {
  const ctx = canvas.getContext("2d");
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  // TODO: Improve?
  const bars = Math.round(canvas.width) / 15 - 1;
  const max_of_array = Math.max.apply(Math, frameData.fft);

  for (let i = 0; i < bars; i++) {
    const height = convertRange(frameData.fft[i], [0, max_of_array], [0, canvas.height / 2 - 20]);
    const centerY = canvas.height / 2;

    // draw the bar
    ctx.strokeStyle = background ? background.colors[0] : "#ffffff";
    ctx.lineWidth = 10;
    ctx.lineCap = "round";

    ctx.beginPath();
    ctx.moveTo((i + 1) * 15, centerY);
    ctx.lineTo((i + 1) * 15, centerY + height);
    ctx.stroke();

    ctx.beginPath();
    ctx.moveTo((i + 1) * 15, centerY);
    ctx.lineTo((i + 1) * 15, centerY - height);
    ctx.stroke();
  }
};

export default drawVisualizer2;
What I think needs to be done is to average the FFT data based on the number of bars in the loop. If that makes sense, what is a practical approach, code-wise, to achieve that?
I hope this makes sense; happy to clarify if needed.
I assume that frameData.fft is the Uint8Array containing the actual frequency data filled in by the AnalyserNode.getByteFrequencyData() method.
Your assumption is right - the number of bars of course doesn't match the number of items stored in the array, and with a loop like
for (let i = 0; i < bars; i++) {
  ...
  frameData.fft[i]
  ...
}
you're just using the first few values, from zero up to the number of bars, and ultimately skipping the entire rest of the array.
The fix is quite simple though:
Instead of grabbing values from the array in intervals of 1, the interval must be the number of elements in the array divided by the number of bars. This number is then multiplied by the variable i inside the for-loop and rounded, as the division might result in a decimal number and the array's elements are at integer positions.
Here's an example:
let frameData = {
  fft: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
};
let bars = 5;
let steps = frameData.fft.length / bars;

for (let i = 0; i < bars; i++) {
  console.log(frameData.fft[Math.round(i * steps)]);
}
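If you also want to average each chunk of the FFT instead of picking a single sample per bar (as you suggested in the question), a minimal sketch could look like the following. It assumes frameData.fft is the Uint8Array described above; the helper name averageBins is just illustrative:
// Illustrative helper (not part of the original code): averages each slice
// of the FFT array into one value per bar.
function averageBins(fft, bars) {
  const step = fft.length / bars;
  const averages = [];
  for (let i = 0; i < bars; i++) {
    const start = Math.floor(i * step);
    // make sure each slice contains at least one element
    const end = Math.max(start + 1, Math.floor((i + 1) * step));
    let sum = 0;
    for (let j = start; j < end; j++) {
      sum += fft[j];
    }
    averages.push(sum / (end - start));
  }
  return averages;
}
Inside drawVisualizer2 you would then call averageBins(frameData.fft, bars) once per frame and feed averages[i] into convertRange instead of frameData.fft[i].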
I want to draw StackOverflow's logo with this Neural Network:
The NN should ideally become [r, g, b] = f([x, y]). In other words, it should return RGB colors for a given pair of coordinates. The FFNN works pretty well for simple shapes like a circle or a box. For example after several thousands epochs a circle looks like this:
Try it yourself: https://codepen.io/adelriosantiago/pen/PoNGeLw
However since StackOverflow's logo is far more complex even after several thousands of iterations the FFNN's results are somewhat poor:
From left to right:
StackOverflow's logo at 256 colors.
With 15 hidden neurons: The left handle never appears.
50 hidden neurons: Pretty poor result in general.
0.03 as learning rate: Shows blue in the results (blue is not in the original image)
A time-decreasing learning rate: The left handle appears but other details are now lost.
Try it yourself: https://codepen.io/adelriosantiago/pen/xxVEjeJ
Some parameters of interest are the synaptic.Architect.Perceptron definition and the learningRate value.
How can I improve the accuracy of this NN?
Could you improve the snippet? If so, please explain what you did. If there is a better NN architecture to tackle this type of job could you please provide an example?
Additional info:
Artificial Neural Network library used: Synaptic.js
To run this example in your localhost: See repository
By adding another layer, you get better results:
let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
There are also small changes you can make to improve efficiency (marginally):
Here is my optimized code:
const width = 125
const height = 125
const outputCtx = document.getElementById("output").getContext("2d")
const iterationLabel = document.getElementById("iteration")
const stopAtIteration = 3000

let perceptron = new synaptic.Architect.Perceptron(2, 15, 10, 3)
let iteration = 0

let inputData = (() => {
  const tempCtx = document.createElement("canvas").getContext("2d")
  tempCtx.drawImage(document.getElementById("input"), 0, 0)
  return tempCtx.getImageData(0, 0, width, height)
})()

const getRGB = (img, x, y) => {
  var k = (height * y + x) * 4;
  return [
    img.data[k] / 255,     // R
    img.data[k + 1] / 255, // G
    img.data[k + 2] / 255, // B
    //img.data[(height * y + x) * 4 + 3], // Alpha not used
  ]
}

const paint = () => {
  var imageData = outputCtx.getImageData(0, 0, width, height)
  for (let x = 0; x < width; x++) {
    for (let y = 0; y < height; y++) {
      var rgb = perceptron.activate([x / width, y / height])
      var k = (height * y + x) * 4;
      imageData.data[k] = rgb[0] * 255
      imageData.data[k + 1] = rgb[1] * 255
      imageData.data[k + 2] = rgb[2] * 255
      imageData.data[k + 3] = 255 // Alpha not used
    }
  }
  outputCtx.putImageData(imageData, 0, 0)
  setTimeout(train, 0)
}

const train = () => {
  iterationLabel.innerHTML = ++iteration
  if (iteration > stopAtIteration) return

  let learningRate = 0.01 / (1 + 0.0005 * iteration) // Attempt with dynamic learning rate
  //let learningRate = 0.01 // Attempt with non-dynamic learning rate

  for (let x = 0; x < width; x += 1) {
    for (let y = 0; y < height; y += 1) {
      perceptron.activate([x / width, y / height])
      perceptron.propagate(learningRate, getRGB(inputData, x, y))
    }
  }
  paint()
}

const startTraining = (btn) => {
  btn.disabled = true
  train()
}
EDIT: I made another CodePen with even better results:
https://codepen.io/xurei/pen/KKzWLxg
It is likely to be over-fitted BTW.
The perceptron definition:
let perceptron = new synaptic.Architect.Perceptron(2, 8, 15, 7, 3)
Taking some insights from the lecture/slides of Bhiksha Raj (from slide 62 onwards) and summarizing below:
Each node can be thought of as a linear classifier, and a combination of several nodes in a single layer of a neural network can approximate any basic shape. For example, a rectangle can be formed by 4 nodes, one for each line, and the shape is then approximated by the final output layer.
For complex shapes such as a circle, a single layer may require an infinite number of nodes. The same holds for a single layer and two disjoint shapes (a non-overlapping triangle and rectangle). However, these can still be learnt using more than one hidden layer, where the 1st layer learns the basic shapes and the 2nd layer approximates their disjoint combinations.
Thus, you can treat this logo as a combination of disjoint rectangles (5 rectangles for the orange part and 3 rectangles for the grey part). We could use at least 32 nodes in the 1st hidden layer and a few nodes in the 2nd hidden layer. However, we don't have control over what each node learns, so a few more neurons than strictly required should be helpful.
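Following that reasoning, a wider first hidden layer is worth trying. The following is only a sketch using the same synaptic.Architect.Perceptron constructor that appears in the snippets above; the exact node counts are an assumption you would still need to tune:
// Sketch (assumed node counts): ~32 nodes in the first hidden layer to capture
// the basic shapes, a smaller second hidden layer to combine them into the
// logo's disjoint regions. Input is [x, y], output is [r, g, b], as before.
let perceptron = new synaptic.Architect.Perceptron(2, 32, 8, 3)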
I have struggled to make a moving graph in P5. My idea is to make a graph based on the length of an array: if the length of the array increases, the graph will move up, and if it decreases, it will do the opposite.
My code can be seen at this link, where I have made a comment, where the graph is.
This example from #solub really highlights this issue, and presents a decent solution to the problem.
const W = 680, H = 200; // dimensions of canvas
const time = 400; // number of x tick values
const step = W / time; // time step
let data = []; // to store number of infected people
let count = 0; // steps counter
let posx, fy, c, infected, colors, l, f, x1, x2, y1, y2;

function setup() {
  createCanvas(W, H);
  fill(255, 30, 70, 90);

  // array containing the x positions of the line graph, scaled to fit the canvas
  posx = Float32Array.from({ length: time }, (_, i) => map(i, 0, time, 0, W));

  // function to map the number of infected people to a specific height (here the height of the canvas)
  fy = _ => map(_, 3, 0, H, 10);

  // colors based on height stored in an array list.
  colors = d3.range(H).map(i => d3.interpolateWarm(norm(i, 0, H)));
}

function draw() {
  background('#fff');

  // length of data list - 1 (to access last item of data list)
  l = data.length - 1;

  // frameCount
  f = frameCount;

  // number of infected people (noised gaussian curve)
  c = sin(f * 0.008);
  infected = (exp(-c * c / 2.0) / sqrt(TWO_PI) / 0.2) + map(noise(f * 0.02), 0, 1, -1, 1);

  // store that number at each step (the x-axis tick values)
  if (f & step) {
    data.push(infected);
    count += 1;
  }

  // iterate over data list to rebuild curve at each frame
  for (let i = 0; i < l; i++) {
    y1 = fy(data[i]);
    y2 = fy(data[i + 1]);
    x1 = posx[i];
    x2 = posx[i + 1];

    // vertical lines (x-values)
    strokeWeight(0.2);
    line(x1, H, x1, y1 + 2);

    // polyline
    strokeWeight(2);
    stroke(colors[Math.floor(map(y1, H, 10, H, 0))]);
    line(x1, y1, x2, y2);
  }

  // draw ellipse at last data point
  if (count > 1) {
    ellipse(posx[l], fy(data[l]), 4, 4);
  }

  // reset data and count
  if (count % time === 0) {
    data = [];
    count = 0;
  }
}
For reference, I'm talking about the dark-gray space in the upper left of Discord's Login Page. For anyone who can't access that link, here's a screenshot:
It has a number of effects that are really cool: the dots and darker shadows move with the mouse, but I'm more interested in the "wobbly edge" effect, and to a lesser extent the "fast wobble/scale in" on page load (scaling in the canvas on load would give a similar, if not "cheaper", effect).
Unfortunately, I can't produce much in the way of an MCVE, because I'm not really sure where to start. I tried digging through Discord's assets, but I'm not familiar enough with Webpack to be able to determine what's going on.
Everything I've been able to dig up on "animated wave/wobble" is CSS-powered SVG or clip-path borders; I'd like to produce something a bit more organic.
Very interesting problem. I've scaled the blob down so it is visible in the preview below.
Here is a codepen as well at a larger size.
const SCALE = 0.25;
const TWO_PI = Math.PI * 2;
const HALF_PI = Math.PI / 2;

const canvas = document.createElement("canvas");
const c = canvas.getContext("2d");
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
document.body.appendChild(canvas);

class Blob {
  constructor() {
    this.wobbleIncrement = 0;
    // use this to change the size of the blob
    this.radius = 500;
    // think of this as detail level
    // number of connections in the `bezierSkin`
    this.segments = 12;
    this.step = HALF_PI / this.segments;
    this.anchors = [];
    this.radii = [];
    this.thetaOff = [];

    const bumpRadius = 100;
    const halfBumpRadius = bumpRadius / 2;

    for (let i = 0; i < this.segments + 2; i++) {
      this.anchors.push(0, 0);
      this.radii.push(Math.random() * bumpRadius - halfBumpRadius);
      this.thetaOff.push(Math.random() * TWO_PI);
    }

    this.theta = 0;
    this.thetaRamp = 0;
    this.thetaRampDest = 12;
    this.rampDamp = 25;
  }

  update() {
    this.thetaRamp += (this.thetaRampDest - this.thetaRamp) / this.rampDamp;
    this.theta += 0.03;

    this.anchors = [0, this.radius];
    for (let i = 0; i <= this.segments + 2; i++) {
      const sine = Math.sin(this.thetaOff[i] + this.theta + this.thetaRamp);
      const rad = this.radius + this.radii[i] * sine;
      const theta = this.step * i;
      const x = rad * Math.sin(theta);
      const y = rad * Math.cos(theta);
      this.anchors.push(x, y);
    }

    c.save();
    c.translate(-10, -10);
    c.scale(SCALE, SCALE);
    c.fillStyle = "blue";
    c.beginPath();
    c.moveTo(0, 0);
    bezierSkin(this.anchors, false);
    c.lineTo(0, 0);
    c.fill();
    c.restore();
  }
}

const blob = new Blob();

function loop() {
  c.clearRect(0, 0, canvas.width, canvas.height);
  blob.update();
  window.requestAnimationFrame(loop);
}
loop();

// array of xy coords, closed boolean
function bezierSkin(bez, closed = true) {
  const avg = calcAvgs(bez);
  const leng = bez.length;

  if (closed) {
    c.moveTo(avg[0], avg[1]);
    for (let i = 2; i < leng; i += 2) {
      let n = i + 1;
      c.quadraticCurveTo(bez[i], bez[n], avg[i], avg[n]);
    }
    c.quadraticCurveTo(bez[0], bez[1], avg[0], avg[1]);
  } else {
    c.moveTo(bez[0], bez[1]);
    c.lineTo(avg[0], avg[1]);
    for (let i = 2; i < leng - 2; i += 2) {
      let n = i + 1;
      c.quadraticCurveTo(bez[i], bez[n], avg[i], avg[n]);
    }
    c.lineTo(bez[leng - 2], bez[leng - 1]);
  }
}

// create anchor points by averaging the control points
function calcAvgs(p) {
  const avg = [];
  const leng = p.length;
  let prev;

  for (let i = 2; i < leng; i++) {
    prev = i - 2;
    avg.push((p[prev] + p[i]) / 2);
  }
  // close
  avg.push((p[0] + p[leng - 2]) / 2, (p[1] + p[leng - 1]) / 2);
  return avg;
}
There are lots of things going on here. In order to create this effect you need a good working knowledge of how quadratic Bézier curves are defined. Once you have that, there is an old trick that I've used many, many times over the years: to generate smooth, linked quadratic Bézier curves, define a list of points and calculate their averages. Then use the points as control points and the new averaged points as anchor points. See the bezierSkin and calcAvgs functions.
With the ability to draw smooth Bézier curves, the rest is about positioning the points in an arc and then animating them. For this we use a little math:
x = radius * sin(theta)
y = radius * cos(theta)
This converts polar to cartesian coordinates, where theta is the angle around the circle's circumference, in the range [0, 2π].
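To see that trick in isolation, here is a minimal, self-contained sketch (separate from the Blob code above; all names are just illustrative). It places a handful of control points on a circle using the polar-to-cartesian formulas and draws one smooth closed loop by using the midpoints (averages) of consecutive control points as anchors:
// Minimal sketch of the averaging trick: control points on a wobbly circle,
// midpoints of consecutive control points used as anchors for smooth joins.
const demoCanvas = document.createElement("canvas");
demoCanvas.width = demoCanvas.height = 300;
document.body.appendChild(demoCanvas);
const dctx = demoCanvas.getContext("2d");

const pts = [];
const n = 8;
for (let i = 0; i < n; i++) {
  const theta = (i / n) * Math.PI * 2;
  const radius = 100 + Math.random() * 30; // small random bump per point
  pts.push([
    150 + radius * Math.sin(theta), // x = radius * sin(theta)
    150 + radius * Math.cos(theta)  // y = radius * cos(theta)
  ]);
}

dctx.beginPath();
// start at the midpoint between the last and first control points
dctx.moveTo((pts[n - 1][0] + pts[0][0]) / 2, (pts[n - 1][1] + pts[0][1]) / 2);
for (let i = 0; i < n; i++) {
  const next = pts[(i + 1) % n];
  // pts[i] is the control point, the midpoint with the next point is the anchor
  dctx.quadraticCurveTo(pts[i][0], pts[i][1], (pts[i][0] + next[0]) / 2, (pts[i][1] + next[1]) / 2);
}
dctx.fillStyle = "blue";
dctx.fill();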
As for the animation, there is a good deal more going on here - I'll see if I have some more time this weekend to update the answer with more details and info, but hopefully this will be helpful.
The animation runs on a canvas and it is a simple bezier curve animation.
For an organic feel, you should look at Perlin noise, which was introduced during the development of the visual effects for the original Tron movie.
You can find a good guide to understanding Perlin noise here.
In the example I've used https://github.com/josephg/noisejs
var c = $('canvas').get(0).getContext('2d');
var simplex = new SimplexNoise();
var t = 0;

function init() {
  window.requestAnimationFrame(draw);
}

function draw() {
  c.clearRect(0, 0, 600, 300);
  c.strokeStyle = "blue";
  c.moveTo(100, 100);
  c.lineTo(300, 100);
  c.stroke();

  // Draw a Bézier curve by using the same line coordinates.
  c.beginPath();
  c.lineWidth = "3";
  c.strokeStyle = "black";
  c.moveTo(100, 100);
  c.bezierCurveTo(
    (simplex.noise2D(t, t) + 1) * 200, (simplex.noise2D(t, t) + 1) * 200,
    (simplex.noise2D(t, t) + 1) * 200, 0,
    300, 100
  );
  c.stroke();

  // draw reference points
  c.fillRect(100 - 5, 100 - 5, 10, 10);
  c.fillRect(200 - 5, 200 - 5, 10, 10);
  c.fillRect(200 - 5, 0 - 5, 10, 10);
  c.fillRect(300 - 5, 100 - 5, 10, 10);

  t += 0.001;
  window.requestAnimationFrame(draw);
}

init();
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/simplex-noise/2.4.0/simplex-noise.js"></script>
<canvas width="600" height="300"></canvas>
Note: on further investigation of Discord's source code, I found that it is using libraries from https://www.npm.red/~epistemex. The Epistemex NPM packages are still online, while the GitHub repos and profile no longer exist.
Note 2: Another approach could be to rely on physics libraries like this demo, but that may be overkill if you just need a single effect.
I am attempting to build a simple HTML5-canvas-based image processor that takes an image and generates a tiled version of it, with each tile being the average color of the underlying image area.
This is easy enough to do outside the context of a Web Worker, but I'd like to use a worker so as not to block the UI thread. The Uint8ClampedArray form the data takes is giving me a headache with regard to how to process it tile by tile.
Below is a plunk demonstrating what I've done so far and how it's not working.
http://plnkr.co/edit/AiHmLM1lyJGztk8GHrso?p=preview
The relevant code is in worker.js
Here it is:
onmessage = function (e) {
  var i,
    j = 0,
    k = 0,
    data = e.data,
    imageData = data.imageData,
    tileWidth = Math.floor(data.tileWidth),
    tileHeight = Math.floor(data.tileHeight),
    width = imageData.width,
    height = imageData.height,
    tile = [],
    len = imageData.data.length,
    offset,
    processedData = [],
    tempData = [],
    timesLooped = 0,
    tileIncremented = 1;

  function sampleTileData(tileData) {
    var blockSize = 20, // only visit every x pixels
      rgb = {r: 0, g: 0, b: 0},
      i = -4,
      count = 0,
      length = tileData.length;

    while ((i += blockSize * 4) < length) {
      if (tileData[i].r !== 0 && tileData[i].g !== 0 && tileData[i].b !== 0) {
        ++count;
        rgb.r += tileData[i].r;
        rgb.g += tileData[i].g;
        rgb.b += tileData[i].b;
      }
    }

    // ~~ used to floor values
    rgb.r = ~~(rgb.r / count);
    rgb.g = ~~(rgb.g / count);
    rgb.b = ~~(rgb.b / count);

    processedData.push(rgb);
  }

  top:
  for (; j <= len; j += (width * 4) - (tileWidth * 4), timesLooped++) {
    if (k === (tileWidth * 4) * tileHeight) {
      k = 0;
      offset = timesLooped - 1 < tileHeight ? 4 : 0;
      j = ((tileWidth * 4) * tileIncremented) - offset;
      timesLooped = 0;
      tileIncremented++;
      sampleTileData(tempData);
      tempData = [];
      //console.log('continue "top" loop for new tile');
      continue top;
    }
    for (i = 0; i < tileWidth * 4; i++) {
      k++;
      tempData.push({r: imageData.data[j + i], g: imageData.data[j + i + 1], b: imageData.data[j + i + 2], a: imageData.data[j + i + 3]});
    }
    //console.log('continue "top" loop for new row per tile');
  }

  postMessage(processedData);
};
I'm sure there's a better way of accomplishing what I'm trying to do starting at the labeled for loop. So any alternative methods or suggestions would be much appreciated.
Update:
I've taken a different approach to solving this:
http://jsfiddle.net/TunMn/425/
Close, but no.
I know what the problem is but I have no idea how to go about amending it. Again, any help would be much appreciated.
Approach 1: Manually calculating average per tile
Here is one approach you can try:
There is only a need for reading; the update can be done later using HW acceleration.
Use async calls for every row (or tile if the image is very wide).
This gives an accurate result, but it is slower and subject to CORS restrictions.
Example
You can see the original image for a blink below. This shows that the asynchronous approach works, as it allows the UI to update while the tiles are processed in chunks.
window.onload = function() {
  var img = document.querySelector("img"),
      canvas = document.querySelector("canvas"),
      ctx = canvas.getContext("2d"),
      w = img.naturalWidth, h = img.naturalHeight,
      // store average tile colors here:
      tileColors = [];

  // draw in image
  canvas.width = w; canvas.height = h;
  ctx.drawImage(img, 0, 0);

  // MAIN CALL: calculate, when done the callback function will be invoked
  avgTiles(function() {console.log("done!")});

  // The tiling function
  function avgTiles(callback) {
    var cols = 8,       // number of tiles (make sure they produce integer values
        rows = 8,       // for tw/th below:)
        tw = (w / cols)|0, // pixel width/height of each tile
        th = (h / rows)|0,
        x = 0, y = 0;

    (function process() { // for async processing
      var data, len, count, r, g, b, i;

      while(x < cols) { // get next tile on x axis
        r = g = b = i = 0;
        data = ctx.getImageData(x * tw, y * th, tw, th).data; // single tile
        len = data.length;
        count = len / 4;

        while(i < len) { // calc this tile's color average
          r += data[i++]; // add values for each component
          g += data[i++];
          b += data[i++];
          i++
        }

        // store average color to array, no need to write back at this point
        tileColors.push({
          r: (r / count)|0,
          g: (g / count)|0,
          b: (b / count)|0
        });

        x++; // next tile
      }

      y++; // next row, but do an async break below:

      if (y < rows) {
        x = 0;
        setTimeout(process, 9); // call it async to allow browser UI to update
      }
      else {
        // draw tiles with average colors, fillRect is faster than setting each pixel:
        for(y = 0; y < rows; y++) {
          for(x = 0; x < cols; x++) {
            var col = tileColors[y * cols + x]; // get stored color
            ctx.fillStyle = "rgb(" + col.r + "," + col.g + "," + col.b + ")";
            ctx.fillRect(x * tw, y * th, tw, th);
          }
        }
        // we're done, invoke callback
        callback()
      }
    })(); // to self-invoke process()
  }
};
<canvas></canvas>
<img src="http://i.imgur.com/X7ZrRkn.png" crossOrigin="anonymous">
Approach 2: Letting the browser do the job
We can also let the browser do the whole job by exploiting interpolation and sampling.
When the browser scales an image down, it calculates the average for each new pixel. If we then turn off linear interpolation when we scale back up, we get each of those averaged pixels as square blocks:
Scale down the image at a ratio that produces as many pixels as you want tiles
Turn off image smoothing
Scale the small image back up to the desired size
This will be many times faster than the first approach, and you will be able to use CORS-restricted images. Just note it may not be as accurate as the first approach; however, it is possible to increase the accuracy by scaling the image down in several steps, each half the size.
Example
window.onload = function() {
  var img = document.querySelector("img"),
      canvas = document.querySelector("canvas"),
      ctx = canvas.getContext("2d"),
      w = img.naturalWidth, h = img.naturalHeight;

  // draw in image
  canvas.width = w; canvas.height = h;

  // scale down image so number of pixels represent number of tiles,
  // here use two steps so we get a more accurate result:
  ctx.drawImage(img, 0, 0, w, h, 0, 0, w*0.5, h*0.5);    // 50%
  ctx.drawImage(canvas, 0, 0, w*0.5, h*0.5, 0, 0, 8, 8); // 8 tiles

  // turn off image-smoothing
  ctx.imageSmoothingEnabled =
  ctx.msImageSmoothingEnabled =
  ctx.mozImageSmoothingEnabled =
  ctx.webkitImageSmoothingEnabled = false;

  // scale image back up
  ctx.drawImage(canvas, 0, 0, 8, 8, 0, 0, w, h);
};
<canvas></canvas>
<img src="http://i.imgur.com/X7ZrRkn.png" crossOrigin="anonymous">
I was wondering how to go about making Tetris pieces fall. I have followed a few tutorials and I have made the complete game, but I am a little stumped on how they actually got the pieces to fall, and how they turned the 2D arrays into actual blocks. Can someone guide me in the right direction here? I am just trying to learn the process better; this was all done on a canvas.
For example, here is the L piece:
function LPiece() {
  // the pieces are represented in arrays shaped like the pieces, for example, here is an L.
  // each state is a form the piece takes on, so each state after this.state1 is this.state1 rotated.
  // each state is individual so we can call out which state to use depending on the option selected.
  this.state1 = [ [1, 0],
                  [1, 0],
                  [1, 1] ];

  // and here's a different state (the same piece rotated)
  this.state2 = [ [0, 0, 1],
                  [1, 1, 1] ];

  this.state3 = [ [1, 1],
                  [0, 1],
                  [0, 1] ];

  this.state4 = [ [1, 1, 1],
                  [1, 0, 0] ];

  // and now we tie them all to one
  this.states = [this.state1, this.state2, this.state3, this.state4];

  // reference to the state the piece is currently in.
  this.curState = 0;

  // color of piece
  this.color = 0;

  // tracking the piece's grid x and y coords, this is set at 4, -3 so it isn't initially visible.
  this.gridx = 4;
  this.gridy = -3;
}
piece.color = Math.floor(Math.random() * 8);
I added comments to try to help myself understand it initially.
And here is the image they used for each block (each block is one color):
http://i.imgur.com/Mh5jMox.png
So how would he translate the array to be an actual block, and then get that to fall down the board? I have searched endlessly and I am just confused. For example, how did he set the gridx and gridy to the x and y coordinates without ever saying this.y = gridy or something like that? Does anyone have any suggestions on what to do here? Thanks.
Here is how he drew the piece, I guess. I still don't understand how he linked the x and y to the gridx and gridy of the piece without actually saying that the x and y are the grid x and y.
function drawPiece(p) {
  // connecting the y and x coords of pieces to draw using the arrays of pieces we defined earlier.
  var drawX = p.gridx;
  var drawY = p.gridy;
  var state = p.curState;

  // looping through to get a piece's current state, and drawing it to the board.
  // rows: p.states[state] decides the length by the current state (the block width)
  for (var r = 0, len = p.states[state].length; r < len; r++) {
    // columns: p.states[state][r] decides the length by the current state (the block width)
    for (var c = 0, len2 = p.states[state][r].length; c < len2; c++) {
      // detecting if there is a block to draw depending on the value in the array.
      if (p.states[state][r][c] == 1 && drawY >= 0) {
        ctx.drawImage(blockImg, p.color * SIZE, 0, SIZE, SIZE, drawX * SIZE, drawY * SIZE, SIZE, SIZE);
      }
      drawX += 1;
    }
    // resetting drawX to the piece's gridx
    drawX = p.gridx;
    // incrementing drawY to get the second layer of arrays from the block states.
    drawY += 1;
  }
}
He converts grid units to canvas pixel units in the ctx.drawImage call. Here is a simplified version:
var canvasX = drawX * SIZE;
var canvasY = drawY * SIZE;
ctx.drawImage(blockImg, p.color * SIZE, 0, SIZE, SIZE, canvasX, canvasY, SIZE, SIZE);
https://developer.mozilla.org/en/docs/Web/API/CanvasRenderingContext2D#drawImage()
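As for making the pieces fall: that is typically just a timer that bumps gridy by one row at a fixed interval and redraws. A rough sketch, assuming the LPiece/drawPiece code above plus the ctx, SIZE and blockImg variables it references (collision checks are left out to keep the idea visible):
// Gravity sketch: move the piece one grid row down every dropInterval ms,
// then clear and redraw; drawPiece converts gridx/gridy to pixels via SIZE.
var dropInterval = 500; // milliseconds per row
var piece = new LPiece();
piece.color = Math.floor(Math.random() * 8);

setInterval(function () {
  piece.gridy += 1; // gravity: one grid row per tick
  // a real game would check for collisions here and lock the piece into the
  // board (then spawn a new one) instead of letting it fall forever
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  drawPiece(piece);
}, dropInterval);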