Create an image in JavaScript's Web Worker (worker.js) - javascript

I'm rewriting a small JavaScript file to be able to run it in a worker.js, as documented here:
Mozilla - Web_Workers_API
The worker.js shall display an image on an OffscreenCanvas, as documented here:
Mozilla - OffscreenCanvas documentation
The initial script uses the following statement, which obviously cannot be used in a worker.js file because there is no "document":
var imgElement = document.createElement("img");
imgElement.src = canvas.toDataURL("image/png");
But how can I substitute the
document.createElement("img");
statement in the worker.js so that I can still use the second statement:
imgElement.src = canvas.toDataURL("image/png");
If anyone has any idea, it would be really appreciated. :)

Just don't.
Instead of exporting the canvas content and making the browser decode that image only to display it, simply display the HTMLCanvasElement directly.
This advice already applied before you switched to an OffscreenCanvas, and it still does.
Then how to draw on an OffscreenCanvas in a Worker and still display it? I hear you ask.
Well, you can request an OffscreenCanvas from an HTMLCanvasElement through its transferControlToOffscreen() method.
So the way to go is: in the UI thread, you generate the <canvas> element that will be used for displaying the image, and you generate an OffscreenCanvas from it. Then you start your Worker, to which you'll transfer the OffscreenCanvas.
In the Worker, you'll wait for the OffscreenCanvas in the onmessage event, grab the context, and draw on it.
UI thread
const canvas = document.createElement("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(url);
// the second argument transfers ownership of the OffscreenCanvas to the Worker
worker.postMessage(offscreen, [offscreen]);
container.append(canvas);
Worker thread
onmessage = (evt) => {
  const canvas = evt.data; // the transferred OffscreenCanvas
  const ctx = canvas.getContext(ctx_type); // e.g. "2d" or "webgl"
  // ... draw on ctx; everything gets painted to the on-screen <canvas>
};
All the drawings made from the Worker will get painted on the visible canvas, without blocking the UI thread at all.
const canvas = document.querySelector("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(getWorkerURL());
worker.postMessage(offscreen, [offscreen]);
function getWorkerURL() {
  const worker_script = `
    onmessage = (evt) => {
      const canvas = evt.data;
      const w = canvas.width = 500;
      const h = canvas.height = 500;
      const ctx = canvas.getContext("2d");
      // draw some noise
      const img = new ImageData(w, h);
      const arr = new Uint32Array(img.data.buffer);
      for (let i = 0; i < arr.length; i++) {
        arr[i] = Math.random() * 0xFFFFFFFF;
      }
      ctx.putImageData(img, 0, 0);
      for (let i = 0; i < 500; i++) {
        ctx.arc(Math.random() * w, Math.random() * h, Math.random() * 20, 0, Math.PI * 2);
        ctx.closePath();
      }
      ctx.globalCompositeOperation = "xor";
      ctx.fill();
    };
  `;
  const blob = new Blob([worker_script]);
  return URL.createObjectURL(blob);
}
canvas { border: 1px solid; }
<canvas></canvas>

Related

The canvas has been tainted by cross-origin data. local image

I have a local image and I'm trying to use getImageData() so I can loop through the pixels. But I keep getting this error:
"DOMException: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': The canvas has been tainted by cross-origin data".
I have searched and can't find an answer that applies. Any help is appreciated. Again, it's a local image.
document.addEventListener("DOMContentLoaded",(e)=>{
const canvas = document.querySelector('#canvas');
const ctx = canvas.getContext('2d');
canvas.width = 600;
canvas.height = 400;
let imgObj = new Image()// this will load the image so I can put it on the canvas
imgObj.onload = function(e){
let w = canvas.width;
let nw = canvas.naturalWidth;
let nh = canvas.height;
let aspect = nw / nh;
let h = w/aspect;
ctx.drawImage(imgObj,0,0,w,nh)
}
//My image is in a local file but I keep getting an error
//The canvas has been tainted by cross-origin data.
imgObj.src = 'img/Tooth1.png';
let imgData1;
const grayScale = function(ev){
try {
imgData1 = ctx.getImageData(0,0,canvas.width, canvas.height);
let arr = imgData1.data;// get the image raw data array
// set up loop to go through each pixel
for(let i = 0; i<arr.length; i=i+4){// inc by 4 eveytime Cause
there are 4 values
// red blue green if 4th it would be alfa
let ttl = arr[i] + arr[i+1] + arr[i+2];
let avg = parseInt(ttl/3);
// if i set all three values to the same it will be some color of
grey
arr[i] = avg;
arr[i+1] = avg;
arr[i+2] = avg;
}
} catch (error) {
console.log(error)
}
imgData.data = arr;
ctx.putImageData(imgData, 0,0);
}
canvas.addEventListener('click', grayScale);
});
I had to set up a server and have the image stored there. Then I could run it through localhost and it worked.
I found this link that answers the question in more detail:
"Cross origin requests are only supported for HTTP." error when loading a local file

Loading images before rendering JS canvas

I'm writing one of those simple games to learn JS and I'm learning HTML5 in the process so I need to draw things on canvas.
Here's the code:
let paddle = new Paddle(GAME_WIDTH, GAME_HEIGHT);
new InputHandler(paddle);
let lastTime = 0;
const ball = new Image();
ball.src = 'assets/ball.png';
function gameLoop(timeStamp) {
  let dt = timeStamp - lastTime;
  lastTime = timeStamp;
  ctx.clearRect(0, 0, 600, 600);
  paddle.update(dt);
  paddle.draw(ctx);
  ball.onload = () => {
    ctx.drawImage(ball, 20, 20);
  };
  window.requestAnimationFrame(gameLoop);
}
gameLoop();
Screenshot before the change: no ball.
Now I comment out the clearRect():
Screenshot after the change: hello ball.
There's also a paddle at the bottom of the canvas that doesn't seem to be affected by the clearRect() method. It works just fine. What am I missing here?
It doesn't make much sense to put the image's onload handler inside the game loop. This means the game has to begin running before the image's onload function is set, leading to a pretty confusing situation.
The correct sequence is to set the onload handlers, then the image sources, then await all of the image onloads firing before running the game loop. Setting the main loop to an onload directly is pretty easy when you only have one image, but for a game with multiple assets, this can get awkward quickly.
Here's a minimal example of how you might load many game assets using Promise.all. Very likely, you'll want to unpack the loaded images into more descriptive objects rather than an array, but this is a start.
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = 400;
canvas.height = 250;
const ctx = canvas.getContext("2d");
const assets = [
  "http://placekitten.com/120/100",
  "http://placekitten.com/120/120",
  "http://placekitten.com/120/140",
];
const assetsLoaded = assets.map(url =>
  new Promise((resolve, reject) => {
    const img = new Image();
    img.onerror = e => reject(`${url} failed to load`);
    img.onload = e => resolve(img);
    img.src = url;
  })
);
Promise
  .all(assetsLoaded)
  .then(images => {
    (function gameLoop() {
      requestAnimationFrame(gameLoop);
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      images.forEach((e, i) =>
        ctx.drawImage(
          e,
          i * 120, // x
          Math.sin(Date.now() * 0.005) * 20 + 40 // y
        )
      );
    })();
  })
  .catch(err => console.error(err));

Create JavaScript Waveform Visualization With Howler.js

I am trying to produce a waveform (https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API) with howler.js. I see the dataArray looping through the draw function. However, it only draws a straight line because the v variable always returns 1. I based the code off a pretty common MDN example, which leads me to believe the way I am getting the howler data may be incorrect.
HTML
<div id="play">play</div>
<canvas id="canvas"></canvas>
JS
let playing = false
const playBtn = document.getElementById('play')
const canvas = document.getElementById('canvas')
const canvasCtx = canvas.getContext('2d')
const WIDTH = canvas.width
const HEIGHT = canvas.height
let drawVisual = null
/*
files
https://s3-us-west-2.amazonaws.com/s.cdpn.io/481938/Find_My_Way_Home.mp3
*/
/*
streams
'http://rfcmedia.streamguys1.com/MusicPulse.mp3'
*/
let analyser = null
let bufferLength = null
let dataArray = null
const howler = new Howl({
  html5: true,
  format: ['mp3', 'aac'],
  src: 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/481938/Find_My_Way_Home.mp3',
  onplay: () => {
    analyser = Howler.ctx.createAnalyser()
    Howler.masterGain.connect(analyser)
    analyser.connect(Howler.ctx.destination)
    analyser.fftSize = 2048
    analyser.minDecibels = -90
    analyser.maxDecibels = -10
    analyser.smoothingTimeConstant = 0.85
    bufferLength = analyser.frequencyBinCount
    dataArray = new Uint8Array(bufferLength)
    canvasCtx.clearRect(0, 0, WIDTH, HEIGHT)
    const draw = () => {
      drawVisual = requestAnimationFrame(draw)
      analyser.getByteTimeDomainData(dataArray)
      canvasCtx.fillStyle = '#000'
      canvasCtx.fillRect(0, 0, WIDTH, HEIGHT)
      canvasCtx.lineWidth = 2
      canvasCtx.strokeStyle = 'limegreen'
      canvasCtx.beginPath()
      let sliceWidth = (WIDTH * 1.0) / bufferLength
      let x = 0
      for (let i = 0; i < bufferLength; i++) {
        let v = dataArray[i] / 128.0
        let y = (v * HEIGHT) / 2
        if (i === 0) {
          canvasCtx.moveTo(x, y)
        } else {
          canvasCtx.lineTo(x, y)
        }
        x += sliceWidth
      }
      canvasCtx.lineTo(canvas.width, canvas.height / 2)
      canvasCtx.stroke()
    }
    draw()
  }
})
playBtn.addEventListener('click', () => {
  if (!playing) {
    howler.play()
    playing = true
  }
})
To get it working:
Remove html5: true.
There is a CORS setup issue with your audio source. What are your bucket CORS settings? Access to XMLHttpRequest at 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/481938/Find_My_Way_Home.mp3' from origin 'null' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
The CORS issue leads to your dataArray being full of 128s, which basically means silence even though the music is playing.
With that I got your visualizer to work. (You can bypass CORS in Chrome: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security)
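For reference, an S3 bucket CORS configuration along these lines (JSON, as accepted by the S3 console) should unblock the request; the wildcard origin is an assumption you'd want to tighten for production:
[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"]
  }
]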
Here is the code for the waveform:
const data = audioBuffer.getChannelData(0)
context.beginPath()
const last = data.length - 1
for (let i = 0; i <= last; i++) {
  context.lineTo(i / last * width, height / 2 - height * data[i])
}
context.strokeStyle = 'white'
context.stroke()
How do you get this audioBuffer from howler? I'm not suggesting you try it, because howler may not use the Web Audio API, and there is no documented way, only digging through the source code. Instead, here is the code to load this buffer directly:
const url = 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/481938/Find_My_Way_Home.mp3'
const getAudioBuffer = async (url) => {
  const context = new AudioContext()
  const result = await new Promise(resolve => {
    const request = new XMLHttpRequest()
    request.open('GET', url, true)
    request.responseType = 'arraybuffer'
    request.onload = () => resolve(request.response)
    request.send()
  })
  return await context.decodeAudioData(result)
}
const audioBuffer = await getAudioBuffer(url) // getAudioBuffer is async, so it must be awaited (or .then-ed)
audioBuffer.getChannelData(0) // it can have multiple channels; each channel is a Float32Array
But! This is a waveform without animation: the track is downloaded and the waveform is drawn once.
In your example you are trying to make something animated; using the code above, it's possible to make something like a window moving from start to end according to the playback position.
So my answer is not an answer, no animation, no howler, but I hope it helps :)
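That said, here is a hedged sketch of the animated idea: redraw the static waveform each frame and overlay a playhead line driven by the playback position. It assumes the context, width, height and audioBuffer from the code above, plus a playing howler instance (howler.seek() and howler.duration() both report seconds). Redrawing every sample each frame is wasteful; caching the waveform on a second canvas would be the obvious next step.
const drawFrame = () => {
  requestAnimationFrame(drawFrame)
  context.clearRect(0, 0, width, height)
  const data = audioBuffer.getChannelData(0)
  context.beginPath()
  const last = data.length - 1
  for (let i = 0; i <= last; i++) {
    context.lineTo(i / last * width, height / 2 - height * data[i])
  }
  context.strokeStyle = 'white'
  context.stroke()
  // overlay the playhead at the current playback position
  const x = (howler.seek() / howler.duration()) * width
  context.strokeStyle = 'red'
  context.beginPath()
  context.moveTo(x, 0)
  context.lineTo(x, height)
  context.stroke()
}
drawFrame()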

Read Audio File For Web-Audio API Analyzer (Node.Js server, JS Frontend)

I set up a simple Node.js server to serve a .wav file to my local frontend.
require('dotenv').config();
const debugBoot = require('debug')('boot');
const cors = require('cors')
const express = require('express');
const app = express();
app.set('port', process.env.PORT || 3000);
app.use(cors());
app.use(express.static('public'));
const server = app.listen(app.get('port'), () => {
  const port = server.address().port;
  debugBoot('Server running at http://localhost:' + port);
});
On my local frontend I receive the file:
fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(response => process(response.body))
function process(stream) {
  console.log(stream);
  const context = new AudioContext();
  const analyser = context.createAnalyser();
  const source = context.createMediaStreamSource(stream);
  source.connect(analyser);
  const data = new Uint8Array(analyser.frequencyBinCount);
}
I want to pipe the stream into AudioContext().createMediaStreamSource. I could do this with a MediaStream, e.g., from the microphone.
But with the ReadableStream, I get the error Failed to execute 'createMediaStreamSource' on 'AudioContext': parameter 1 is not of type 'MediaStream'.
I want to serve/receive the audio in a way that I can plug it into the Web Audio API and use the analyzer. It wouldn't need to be a stream if there is another solution.
I basically merged these two examples together:
https://www.youtube.com/watch?v=hYNJGPnmwls (https://codepen.io/jakealbaugh/pen/jvQweW/)
and the example from the web-audio api:
https://github.com/mdn/webaudio-examples/blob/master/audio-analyser/index.html
let audioBuffer;
let sourceNode;
let analyserNode;
let javascriptNode;
let audioContext;
let audioData = null;
let audioPlaying = false;
let sampleSize = 1024; // number of samples to collect before analyzing data
let frequencyDataArray; // array to hold the frequency data
// Global Variables for the Graphics
let canvasWidth = 512;
let canvasHeight = 256;
let ctx;
document.addEventListener("DOMContentLoaded", function () {
  ctx = document.body.querySelector('canvas').getContext("2d");
  // the AudioContext is the primary 'container' for all your audio node objects
  try {
    audioContext = new AudioContext();
  } catch (e) {
    alert('Web Audio API is not supported in this browser');
  }
  // When the Start button is clicked, finish setting up the audio nodes, play the sound,
  // gather samples for the analysis, and update the canvas
  document.body.querySelector('#start_button').addEventListener('click', function (e) {
    e.preventDefault();
    // Set up the audio Analyser, the Source Buffer and the javascriptNode
    initCanvas();
    setupAudioNodes();
    javascriptNode.onaudioprocess = function () {
      // get the frequency data for this sample
      analyserNode.getByteFrequencyData(frequencyDataArray);
      // draw the display if the audio is playing
      console.log(frequencyDataArray);
      draw();
    };
    loadSound();
  });
  document.body.querySelector("#stop_button").addEventListener('click', function (e) {
    e.preventDefault();
    sourceNode.stop(0);
    audioPlaying = false;
  });
  function loadSound() {
    fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(response => {
      response.arrayBuffer().then(function (buffer) {
        audioContext.decodeAudioData(buffer).then((audioBuffer) => {
          console.log('audioBuffer', audioBuffer);
          // {length: 1536000, duration: 32, sampleRate: 48000, numberOfChannels: 2}
          audioData = audioBuffer;
          playSound(audioBuffer);
        });
      });
    });
  }
  function setupAudioNodes() {
    sourceNode = audioContext.createBufferSource();
    analyserNode = audioContext.createAnalyser();
    analyserNode.fftSize = 4096;
    javascriptNode = audioContext.createScriptProcessor(sampleSize, 1, 1);
    // Create the array for the data values
    frequencyDataArray = new Uint8Array(analyserNode.frequencyBinCount);
    // Now connect the nodes together
    sourceNode.connect(audioContext.destination);
    sourceNode.connect(analyserNode);
    analyserNode.connect(javascriptNode);
    javascriptNode.connect(audioContext.destination);
  }
  function initCanvas() {
    ctx.fillStyle = 'hsl(280, 100%, 10%)';
    ctx.fillRect(0, 0, canvasWidth, canvasHeight);
  }
  // Play the audio once
  function playSound(buffer) {
    sourceNode.buffer = buffer;
    sourceNode.start(0); // Play the sound now
    sourceNode.loop = false;
    audioPlaying = true;
  }
  function draw() {
    const data = frequencyDataArray;
    const dataLength = frequencyDataArray.length;
    console.log("data", data);
    const h = canvasHeight / dataLength;
    // draw on the right edge
    const x = canvasWidth - 1;
    // copy the old image and move it one pixel to the left
    let imgData = ctx.getImageData(1, 0, canvasWidth - 1, canvasHeight);
    ctx.fillRect(0, 0, canvasWidth, canvasHeight);
    ctx.putImageData(imgData, 0, 0);
    for (let i = 0; i < dataLength; i++) {
      let rat = data[i] / 255;
      let hue = Math.round(rat * 120 + 280) % 360; // wrap the hue into [0, 360)
      let sat = '100%';
      let lit = 10 + (70 * rat) + '%';
      // console.log("rat %s, hue %s, lit %s", rat, hue, lit);
      ctx.beginPath();
      ctx.strokeStyle = `hsl(${hue}, ${sat}, ${lit})`;
      ctx.moveTo(x, canvasHeight - (i * h));
      ctx.lineTo(x, canvasHeight - (i * h + h));
      ctx.stroke();
    }
  }
});
I'll briefly explain what each part does:
creating audio context
When the DOM loads, the AudioContext is created.
loading the audio file and converting it to AudioBuffer
Then I load the sound from my backend server (the code is shown above). The response is converted to an ArrayBuffer, which is then decoded into an AudioBuffer. This is basically the main solution for the question above.
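Distilled to its core, the load-and-decode step is just this (a minimal sketch, assuming the audioContext created above and the same endpoint):
async function loadAudioBuffer(url) {
  const response = await fetch(url);
  const buffer = await response.arrayBuffer();
  return audioContext.decodeAudioData(buffer);
}
// usage: loadAudioBuffer('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(playSound);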
Process AudioBuffer
To give a bit more context on how to use the loaded audio file, I included the rest of the code.
To further process the AudioBuffer, a source is created and the buffer is assigned to it: sourceNode.buffer = buffer.
The javascriptNode acts, IMHO, like a stream through which you can access the output of the analyser.
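One caveat (my note, not part of the code above): ScriptProcessorNode is deprecated. A simple alternative is to drop the javascriptNode entirely and poll the analyser from a requestAnimationFrame loop:
// sketch: poll the analyser once per animation frame instead of onaudioprocess
function renderLoop() {
  requestAnimationFrame(renderLoop);
  analyserNode.getByteFrequencyData(frequencyDataArray);
  draw();
}
renderLoop();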

jpg loaded with python keras is different from jpg loaded in javascript

I am loading a jpg image in Python on the server. Then I am loading the same jpg image with JavaScript on the client. Finally, I am trying to compare it with the Python output. But the loaded data are different, so the images do not match. Where is my mistake?
Python code
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
filename = './rcl.jpg'
original = load_img(filename)
numpy_image = img_to_array(original)
print(numpy_image)
JS code
import * as tf from '@tensorflow/tfjs';
photo() {
  var can = document.createElement('canvas');
  var ctx = can.getContext("2d");
  var img = new Image();
  img.onload = function() {
    ctx.drawImage(img, 0, 0);
  };
  img.crossOrigin = "anonymous";
  img.src = './rcl.jpg';
  var tensor = tf.fromPixels(can).toFloat();
  tensor.print();
}
You are drawing the image on a canvas before rendering the canvas as a tensor. Drawing on a canvas can alter the shape of the initial image. For instance, unless specified otherwise (which is the case with your code), a canvas is created with a width of 300px and a height of 150px. Therefore the resulting shape of the tensor will be more or less [150, 300, 3].
1- Using Canvas
Canvases are well suited to resizing an image, as one can draw all or part of the initial image onto the canvas. In that case, one needs to resize the canvas.
const canvas = document.createElement('canvas')
// canvas has initial width: 300px and height: 150px
canvas.width = image.width
canvas.height = image.height
// canvas is set to redraw the initial image
const ctx = canvas.getContext('2d')
ctx.drawImage(image, 0, 0) // to draw the entire image
One word of caution, though: all of the above should be executed after the image has finished loading, using the onload event handler, as follows:
const im = new Image()
im.crossOrigin = 'anonymous'
im.src = 'url'
// document.body.appendChild(im) (optional, if the image should be displayed)
im.onload = () => {
  const canvas = document.createElement('canvas')
  canvas.width = im.width
  canvas.height = im.height
  const ctx = canvas.getContext('2d')
  ctx.drawImage(im, 0, 0)
}
or using async/await
function load(url){
  return new Promise((resolve, reject) => {
    const im = new Image()
    im.crossOrigin = 'anonymous'
    im.src = url // use the url argument, not the string 'url'
    im.onload = () => {
      resolve(im)
    }
  })
}
}
// use the load function inside an async function
(async () => {
  const image = await load(url)
  const canvas = document.createElement('canvas')
  canvas.width = image.width
  canvas.height = image.height
  const ctx = canvas.getContext('2d')
  ctx.drawImage(image, 0, 0)
})()
2- Using fromPixels on the image directly
If the image is not to be resized, you can directly render the image as a tensor by calling fromPixels on the image itself:
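A minimal sketch of that, reusing the load() helper from above (the tensor shape then matches the image's natural dimensions; note that later tfjs versions renamed tf.fromPixels to tf.browser.fromPixels):
(async () => {
  const image = await load('./rcl.jpg')
  const tensor = tf.fromPixels(image).toFloat()
  tensor.print()
})()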
