I'm writing one of those simple games to learn JS, and I'm learning HTML5 in the process, so I need to draw things on a canvas.
Here's the code:
let paddle = new Paddle(GAME_WIDTH, GAME_HEIGHT);
new InputHandler(paddle);

let lastTime = 0;

const ball = new Image();
ball.src = 'assets/ball.png';

function gameLoop(timeStamp) {
  let dt = timeStamp - lastTime;
  lastTime = timeStamp;

  ctx.clearRect(0, 0, 600, 600);

  paddle.update(dt);
  paddle.draw(ctx);

  ball.onload = () => {
    ctx.drawImage(ball, 20, 20);
  }

  window.requestAnimationFrame(gameLoop);
}

gameLoop();
Screenshot (before commenting out clearRect()): no ball.
Now I comment out the clearRect():
Screenshot (after commenting): hello ball.
There's also a paddle at the bottom of the canvas that doesn't seem to be affected by the clearRect() method. It works just fine. What am I missing here?
It doesn't make much sense to put the image's onload handler inside the game loop. It means the game has to start running before the handler is even attached, which leads to a pretty confusing situation: the ball is drawn exactly once, inside that onload callback, and every later frame's clearRect() wipes it away again, which is why the ball only shows up once you comment clearRect() out.
The correct sequence is to set the onload handlers, then set the image sources, then wait for all of the onload events to fire before starting the game loop. Wiring the main loop straight to a single onload is easy when you only have one image, but for a game with multiple assets this gets awkward quickly.
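For the single-image case, a minimal sketch might look like this (it reuses paddle, ctx, and lastTime from the code in the question):

const ball = new Image();
ball.onload = () => requestAnimationFrame(gameLoop); // start the loop only once the ball is ready
ball.src = 'assets/ball.png';

function gameLoop(timeStamp) {
  let dt = timeStamp - lastTime;
  lastTime = timeStamp;
  ctx.clearRect(0, 0, 600, 600);
  paddle.update(dt);
  paddle.draw(ctx);
  ctx.drawImage(ball, 20, 20); // redraw the ball every frame, after the clear
  requestAnimationFrame(gameLoop);
}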
Here's a minimal example of how you might load many game assets using Promise.all. Very likely, you'll want to unpack the loaded images into more descriptive objects rather than an array, but this is a start.
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = 400;
canvas.height = 250;
const ctx = canvas.getContext("2d");

const assets = [
  "http://placekitten.com/120/100",
  "http://placekitten.com/120/120",
  "http://placekitten.com/120/140",
];

const assetsLoaded = assets.map(url =>
  new Promise((resolve, reject) => {
    const img = new Image();
    img.onerror = e => reject(`${url} failed to load`);
    img.onload = e => resolve(img);
    img.src = url;
  })
);

Promise
  .all(assetsLoaded)
  .then(images => {
    (function gameLoop() {
      requestAnimationFrame(gameLoop);
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      images.forEach((e, i) =>
        ctx.drawImage(
          e,
          i * 120, // x
          Math.sin(Date.now() * 0.005) * 20 + 40 // y
        )
      );
    })();
  })
  .catch(err => console.error(err))
;
I want to record a video from an HTML <canvas> element at a specific frame rate.
I am using a CanvasCaptureMediaStream with canvas.captureStream(fps), and I also have access to the video track via const track = stream.getVideoTracks()[0], so I call track.requestFrame() to write a frame to the output video buffer via MediaRecorder.
I want to capture exactly one frame at a time and then change the canvas content. Changing the canvas content can take some time (images need to be loaded, etc.), so I cannot capture the canvas in real time.
Some changes to the canvas happen over 500 ms of real time, so that also needs to be compressed into rendering one frame at a time.
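For reference, the setup being described is roughly the following sketch (the fps value and the chunk handling are illustrative assumptions, with canvas standing for the <canvas> element):

const fps = 30;                                    // assumed target frame rate
const stream = canvas.captureStream(fps);          // CanvasCaptureMediaStream
const track = stream.getVideoTracks()[0];          // its video track
const recorder = new MediaRecorder(stream);
const chunks = [];
recorder.ondataavailable = (evt) => chunks.push(evt.data);
recorder.start();
// ...update the canvas (which may take a while), then push one frame explicitly:
track.requestFrame();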
The MediaRecorder API is meant to record live streams; editing is not what it was designed for, and it doesn't do it very well, to be honest...
The MediaRecorder itself has no concept of frame rate; that is normally defined by the MediaStreamTrack. However, the CanvasCaptureMediaStreamTrack doesn't really make it clear what its frame rate is.
We can pass a parameter to HTMLCanvasElement.captureStream(), but this only sets the maximum number of frames we want per second; it's not really an fps parameter.
Also, even if we stop drawing on the canvas, the recorder will still continue to extend the duration of the recorded video in real time (though I think that technically only a single long frame is recorded in this case).
So... we're going to have to hack around...
One thing we can do with the MediaRecorder is to pause() and resume() it.
So it sounds quite easy to pause before the long drawing operation and to resume right after it's done? Yes... and no, it's not that easy either...
Once again, the frame rate is dictated by the MediaStreamTrack, but this MediaStreamTrack cannot be paused.
Well, actually there is one way to pause a special kind of MediaStreamTrack, and luckily I'm talking about CanvasCaptureMediaStreamTracks.
When we call captureStream() with a parameter of 0, we basically get manual control over when new frames are added to the stream.
So here we can synchronize both our MediaRecorder and our MediaStreamTrack to whatever frame rate we want.
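Concretely, that manual control is only a couple of calls (a minimal sketch, not the full recorder shown further down):

const stream = canvas.captureStream(0);   // 0 = no automatic frame capture
const track = stream.getVideoTracks()[0];
// nothing is added to the stream until we explicitly ask for it:
track.requestFrame();                     // pushes the canvas' current state as one frame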
The basic workflow is
await the_long_drawing_task;
resumeTheRecorder();
writeTheFrameToStream(); // track.requestFrame();
await wait( time_per_frame );
pauseTheRecorder();
Doing so, the recorder is awake only for the per-frame duration we decided, and a single frame is passed to the MediaStream during that time, effectively mocking constant-FPS drawing as far as the MediaRecorder is concerned.
But as always, hacks in this still-experimental area come with a lot of browser weirdness, and the following demo actually only works in current Chrome...
For whatever reason, Firefox will always generate files with twice as many frames as requested, and it will also occasionally prepend a long first frame...
Also to be noted: Chrome has a bug where it will update the canvas stream on drawing, even though we initiated this stream with a frameRequestRate of 0. This means that if you start drawing before everything is ready, or if the drawing on your canvas itself takes a long time, our recorder will record half-baked frames that we didn't ask for.
To work around this bug, we need a second canvas, used only for streaming. All we do on that canvas is drawImage the source one, which is always a fast enough operation to avoid the bug.
class FrameByFrameCanvasRecorder {
  constructor(source_canvas, FPS = 30) {
    this.FPS = FPS;
    this.source = source_canvas;
    const canvas = this.canvas = source_canvas.cloneNode();
    const ctx = this.drawingContext = canvas.getContext('2d');
    // we need to draw something on our canvas
    ctx.drawImage(source_canvas, 0, 0);
    const stream = this.stream = canvas.captureStream(0);
    const track = this.track = stream.getVideoTracks()[0];
    // Firefox still uses a non-standard CanvasCaptureMediaStream
    // instead of CanvasCaptureMediaStreamTrack
    if (!track.requestFrame) {
      track.requestFrame = () => stream.requestFrame();
    }
    // prepare our MediaRecorder
    const rec = this.recorder = new MediaRecorder(stream);
    const chunks = this.chunks = [];
    rec.ondataavailable = (evt) => chunks.push(evt.data);
    rec.start();
    // we need to be in 'paused' state
    waitForEvent(rec, 'start')
      .then((evt) => rec.pause());
    // expose a Promise for when it's done
    this._init = waitForEvent(rec, 'pause');
  }
  async recordFrame() {
    await this._init; // we have to wait for the recorder to be paused
    const rec = this.recorder;
    const canvas = this.canvas;
    const source = this.source;
    const ctx = this.drawingContext;
    if (canvas.width !== source.width ||
        canvas.height !== source.height) {
      canvas.width = source.width;
      canvas.height = source.height;
    }
    // start our timer now so whatever happens in between is not taken into account
    const timer = wait(1000 / this.FPS);
    // wake up the recorder
    rec.resume();
    await waitForEvent(rec, 'resume');
    // draw the current state of source on our internal canvas (triggers requestFrame in Chrome)
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(source, 0, 0);
    // force write the frame
    this.track.requestFrame();
    // wait until our frame-time has elapsed
    await timer;
    // put the recorder back to sleep
    rec.pause();
    await waitForEvent(rec, 'pause');
  }
  async export() {
    this.recorder.stop();
    this.stream.getTracks().forEach((track) => track.stop());
    await waitForEvent(this.recorder, "stop");
    return new Blob(this.chunks);
  }
}
///////////////////
// how to use:
(async () => {
  const FPS = 30;
  const duration = 5; // seconds
  let x = 0;
  let frame = 0;
  const ctx = canvas.getContext('2d');
  ctx.textAlign = 'right';
  draw(); // we must have drawn on our canvas context before creating the recorder
  const recorder = new FrameByFrameCanvasRecorder(canvas, FPS);
  // draw one frame at a time
  while (frame++ < FPS * duration) {
    await longDraw(); // do the long drawing
    await recorder.recordFrame(); // record at constant FPS
  }
  // now all the frames have been drawn
  const recorded = await recorder.export(); // we can get our final video file
  vid.src = URL.createObjectURL(recorded);
  vid.onloadedmetadata = (evt) => vid.currentTime = 1e100; // workaround https://crbug.com/642012
  download(vid.src, 'movie.webm');

  // Fake long drawing operations that make real-time recording impossible
  function longDraw() {
    x = (x + 1) % canvas.width;
    draw(); // this triggers a bug in Chrome
    return wait(Math.random() * 300)
      .then(draw);
  }

  function draw() {
    ctx.fillStyle = 'white';
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = 'black';
    ctx.fillRect(x, 0, 50, 50);
    ctx.fillText(frame + " / " + FPS * duration, 290, 140);
  }
})().catch(console.error);
<canvas id="canvas"></canvas>
<video id="vid" controls></video>
<script>
// Some helpers
// Promise based timer
function wait(ms) {
  return new Promise(res => setTimeout(res, ms));
}

// implements a sub-optimal monkey-patch for requestPostAnimationFrame
// see https://stackoverflow.com/a/57549862/3702797 for details
if (!window.requestPostAnimationFrame) {
  window.requestPostAnimationFrame = function monkey(fn) {
    const channel = new MessageChannel();
    channel.port2.onmessage = evt => fn(evt.data);
    requestAnimationFrame((t) => channel.port1.postMessage(t));
  };
}

// Promisifies EventTarget.addEventListener
function waitForEvent(target, type) {
  return new Promise((res) => target.addEventListener(type, res, {
    once: true
  }));
}

// creates a downloadable anchor from a url
function download(url, filename = "file.ext") {
  const a = document.createElement('a');
  a.textContent = a.download = filename;
  a.href = url;
  document.body.append(a);
  return a;
}
</script>
I asked a similar question which has been linked to this one. In the meantime I came up with a solution which overlaps Kaiido's and which I think is worth reading.
I added two tricks:
I deferred the next render (see code), which fixes the problem of Firefox generating twice the number of frames
I stored an accumulated timing error to correct setTimeout's inaccuracies. I personally used it to tweak the progression of my render, for example to skip frames when there is sudden latency and keep the duration of the video close to the target duration. It is not enough to smooth out setTimeout, though.
const recordFrames = (onstop, canvas, fps = 30) => {
  const chunks = [];
  // get Firefox to initialise the canvas
  canvas.getContext('2d').fillRect(0, 0, 0, 0);
  const stream = canvas.captureStream();
  const recorder = new MediaRecorder(stream);
  recorder.addEventListener('dataavailable', ({data}) => chunks.push(data));
  recorder.addEventListener('stop', () => onstop(new Blob(chunks)));
  const frameDuration = 1000 / fps;
  const frame = (next, start) => {
    recorder.pause();
    api.error += Date.now() - start - frameDuration;
    setTimeout(next, 0); // helps Firefox record the right frame duration
  };
  const api = {
    error: 0,
    init() {
      recorder.start();
      recorder.pause();
    },
    step(next) {
      recorder.resume();
      setTimeout(frame, frameDuration, next, Date.now());
    },
    stop: () => recorder.stop()
  };
  return api;
}
how to use
const fps = 30;
const duration = 5000;
const animation = Something;
const videoOutput = blob => {
  const video = document.createElement('video');
  video.src = URL.createObjectURL(blob);
  document.body.appendChild(video);
}
const recording = recordFrames(videoOutput, canvas, fps);
const startRecording = () => {
  recording.init();
  animation.play();
};
// I am assuming you can call these from your library
const onAnimationRender = nextFrame => recording.step(nextFrame);
const onAnimationEnd = () => recording.step(recording.stop);

let now = 0;
const progression = () => {
  now = now + 1 + recording.error * fps / 1000;
  recording.error = 0;
  return now * 1000 / fps / duration;
}
I found this solution to be satisfying at 30fps in both Chrome and Firefox. I didn't experience the Chrome bugs mentioned by Kaiido and thus didn't implement anything to deal with them.
I'm rewriting a small piece of JavaScript so that it can go into a worker.js, as documented here:
Mozilla - Web_Workers_API
The worker.js shall display an image on an OffscreenCanvas, as documented here:
Mozilla - OffscreenCanvas documentation
The initial script uses the following statement, which obviously cannot be used in a worker.js file because there is no "document":
var imgElement = document.createElement("img");
imgElement.src = canvas.toDataURL("image/png");
But how can I substitute the
document.createElement("img");
statement in worker.js so that I can still use the second statement:
imgElement.src = canvas.toDataURL("image/png");
If anyone has any idea, it would be really appreciated. :)
Just don't.
Instead of exporting the canvas content and making the browser decode that image only to display it, simply display the HTMLCanvasElement directly.
This advice already applied before you switched to an OffscreenCanvas, and it still does.
Then how do you draw on an OffscreenCanvas in a Worker and still display it? I hear you ask.
Well, you can request an OffscreenCanvas from an HTMLCanvasElement through its transferControlToOffscreen() method.
So the way to go is: in the UI thread, you generate the <canvas> element that will be used for displaying the image, and you generate an OffscreenCanvas from it. Then you start your Worker, to which you'll transfer the OffscreenCanvas.
In the Worker, you wait for the OffscreenCanvas in the onmessage event, grab its context, and draw on it.
UI thread
const canvas = document.createElement("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(url);
worker.postMessage(offscreen, [offscreen]);
container.append(canvas);
Worker thread
onmessage = (evt) => {
  const canvas = evt.data;
  const ctx = canvas.getContext(ctx_type);
  // ...
};
All the drawings made from the Worker will get painted on the visible canvas, without blocking the UI thread at all.
const canvas = document.querySelector("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(getWorkerURL());
worker.postMessage(offscreen, [offscreen]);
function getWorkerURL() {
  const worker_script = `
    onmessage = (evt) => {
      const canvas = evt.data;
      const w = canvas.width = 500;
      const h = canvas.height = 500;
      const ctx = canvas.getContext("2d");
      // draw some noise
      const img = new ImageData(w, h);
      const arr = new Uint32Array(img.data.buffer);
      for (let i = 0; i < arr.length; i++) {
        arr[i] = Math.random() * 0xFFFFFFFF;
      }
      ctx.putImageData(img, 0, 0);
      for (let i = 0; i < 500; i++) {
        ctx.arc(Math.random() * w, Math.random() * h, Math.random() * 20, 0, Math.PI * 2);
        ctx.closePath();
      }
      ctx.globalCompositeOperation = "xor";
      ctx.fill();
    };
  `;
  const blob = new Blob([worker_script]);
  return URL.createObjectURL(blob);
}
canvas { border: 1px solid; }
<canvas></canvas>
I have a canvas where I use drawImage to draw a bunch of images.
How I want the result to be:
I want the first image I draw to be on layer 1, the next image on layer 2, and so on.
What really happens:
The images get placed on random layers.
const images = [
'https://attefallsverket.picarioxpo.com/1_series_base.jpg?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_housebase.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_roof_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_windows.pfs?1=1&p.c=71343a&p.tn=&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_door_01.pfs?1=1&p.c=&p.tn=rainsystem_grey.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_01.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_corners.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_windows.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_roof.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_roof_metal_orange.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_rain_system.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1_series_terrace.png?1=1&width=2000',
];
let c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
for (let i = 0; i < images.length; i++) {
  let img = new Image();
  img.crossOrigin = '';
  img.src = images[i];
  img.onload = () => {
    ctx.drawImage(img, 0, 0, c.width, c.height);
  }
}
<canvas id="myCanvas" width="280" height="157.5" style="border:1px solid #d3d3d3;">
Your browser does not support the HTML5 canvas tag.
</canvas>
You would need to ensure that the first image has loaded before launching the load of the next. So make an asynchronous loop:
const images = [
'https://attefallsverket.picarioxpo.com/1_series_base.jpg?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_housebase.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_roof_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_windows.pfs?1=1&p.c=71343a&p.tn=&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_door_01.pfs?1=1&p.c=&p.tn=rainsystem_grey.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_01.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_corners.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_windows.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_roof.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_roof_metal_orange.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_rain_system.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1_series_terrace.png?1=1&width=2000',
];
let c = document.getElementById("myCanvas");
let ctx = c.getContext("2d");
(function loop(i) {
  if (i >= images.length) return; // all done
  let img = new Image();
  img.crossOrigin = '';
  img.onload = () => {
    ctx.drawImage(img, 0, 0, c.width, c.height);
    loop(i + 1); // continue with next...
  }
  img.src = images[i];
})(0); // start loop with first image
<canvas id="myCanvas" width="280" height="157.5"></canvas>
Here's what you really want to do:
addEventListener('load', ()=>{ // page and script load
const images = [
'https://attefallsverket.picarioxpo.com/1_series_base.jpg?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_housebase.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_roof_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_windows.pfs?1=1&p.c=71343a&p.tn=&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_door_01.pfs?1=1&p.c=&p.tn=rainsystem_grey.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_01.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_corners.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_windows.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_roof.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_roof_metal_orange.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_rain_system.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1_series_terrace.png?1=1&width=2000'
];
const canvas = document.getElementById('myCanvas'), ctx = canvas.getContext('2d'), promises = [];
let w = canvas.width, h = canvas.height, p;
for (let m of images) {
  p = new Promise(r => {
    const im = new Image();
    im.onload = () => {
      r(im);
    };
    im.src = m;
  });
  promises.push(p);
}
Promise.all(promises).then(imgs => {
  for (let im of imgs) {
    ctx.drawImage(im, 0, 0, w, h);
  }
});
}); // end page load
<canvas id='myCanvas' width='280' height='157.5'></canvas>
The problem is that you can't really control how long it will take the browser to download each image. So the first image that fires the onload event might not be the first image in the array; likewise, the second picture might be the 10th in the array, and so on.
To work around this, I'd recommend going through your images array one by one, starting to load a new image as soon as the previous one has finished loading.
Here's an example:
const images = [
'https://attefallsverket.picarioxpo.com/1_series_base.jpg?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_housebase.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_roof_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_windows.pfs?1=1&p.c=71343a&p.tn=&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_door_01.pfs?1=1&p.c=&p.tn=rainsystem_grey.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_01.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_facade_corners.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_windows.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_tin_roof.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_roof_metal_orange.png?1=1&width=2000',
'https://attefallsverket.picarioxpo.com/1kp_rain_system.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000',
'https://attefallsverket.picarioxpo.com/1_series_terrace.png?1=1&width=2000',
];
let imagesLoaded = 0;
let c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
function loadImage() {
  let img = new Image();
  img.crossOrigin = '';
  img.onload = () => {
    ctx.drawImage(img, 0, 0, c.width, c.height);
    if (imagesLoaded + 1 < images.length) {
      imagesLoaded++;
      loadImage(imagesLoaded);
    }
  }
  img.src = images[imagesLoaded];
}
loadImage(0);
<canvas id="myCanvas" width="280" height="157.5" style="border:1px solid #d3d3d3;">
Your browser does not support the HTML5 canvas tag.
</canvas>
Existing answers solve the problem, but they're serialized and don't fire off requests simultaneously. If you want to optimize for getting something on the screen and don't care how long it takes to reach the complete image, this is fine, but if your goal is to get the entire image drawn as quickly as possible and/or not show a partially-completed image, one-by-one network requests are suboptimal.
Instead of this:
request image 0
wait for request 0 over the wire or file IO
draw image 0
request image 1
wait for request 1 over the wire or file IO
draw image 1
...
request image n
wait for request n over the wire or file IO
draw image n
It might make sense to:
request/load all images at once
wait for all images to be received
draw all images in order
The idea is to exploit parallelism and only wait for one image (the slowest) to arrive, overlapping all other requests within the slowest load time, rather than incurring the cost of loading all n images one at a time.
A good way to do this is with promises. You can promisify the onload and onerror callbacks to resolve and reject respectively, then use Promise.all to wait for all images to arrive, at which point you can apply a traditional, synchronous loop to draw the layers in order.
const images = ['https://attefallsverket.picarioxpo.com/1_series_base.jpg?1=1&width=2000','https://attefallsverket.picarioxpo.com/1kp_housebase.png?1=1&width=2000','https://attefallsverket.picarioxpo.com/1kp_facade_roof_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_windows.pfs?1=1&p.c=71343a&p.tn=&width=2000','https://attefallsverket.picarioxpo.com/1kp_door_01.pfs?1=1&p.c=&p.tn=rainsystem_grey.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_facade_01.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_facade_panels.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_facade_corners.pfs?1=1&p.c=&p.tn=wooden_summer_green.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_tin_windows.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_tin_roof.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000','https://attefallsverket.picarioxpo.com/1kp_roof_metal_orange.png?1=1&width=2000','https://attefallsverket.picarioxpo.com/1kp_rain_system.pfs?1=1&p.c=&p.tn=rainsystem_white.jpg&width=2000','https://attefallsverket.picarioxpo.com/1_series_terrace.png?1=1&width=2000',];
const c = document.getElementById("myCanvas");
const ctx = c.getContext("2d");
Promise.all(images.map(url =>
  new Promise((resolve, reject) => {
    const img = new Image();
    img.crossOrigin = "";
    img.onerror = e => reject(`${url} failed to load`);
    img.onload = function () {
      resolve(this);
    };
    img.src = url;
  })))
  .then(images =>
    images.forEach(e =>
      ctx.drawImage(e, 0, 0, c.width, c.height)
    )
  )
  .catch(err => console.error(err))
;
<canvas id="myCanvas" width="280" height="157.5" style="border:1px solid #d3d3d3;">
Your browser does not support the HTML5 canvas tag.
</canvas>
If your goal is to get something on screen as quickly as possible, you could combine the two approaches, doing one fast, serial request for the background, then doing the rest in parallel, or even in batches. But this feels like overkill for this case; I mention the technique for completeness.
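For completeness, a rough sketch of that hybrid approach could look like this, reusing images, c, and ctx from the snippet above, with loadImage as an assumed promisified helper following the same pattern:

// assumed helper: promisified image loading, same pattern as above
const loadImage = url =>
  new Promise((resolve, reject) => {
    const img = new Image();
    img.crossOrigin = "";
    img.onerror = () => reject(`${url} failed to load`);
    img.onload = () => resolve(img);
    img.src = url;
  });

// 1. get the background (first layer) on screen as soon as possible
loadImage(images[0])
  .then(bg => ctx.drawImage(bg, 0, 0, c.width, c.height))
  // 2. then fetch the remaining layers in parallel and draw them in order
  .then(() => Promise.all(images.slice(1).map(loadImage)))
  .then(layers => layers.forEach(img => ctx.drawImage(img, 0, 0, c.width, c.height)))
  .catch(err => console.error(err));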
I'm working on a small canvas animation that requires me to step through a large sprite sheet png so I'm getting a lot of mileage out of drawImage(). I've never had trouble in the past using it, but today I'm running into an odd blocking delay after firing drawImage.
My understanding is that drawImage is synchronous, but when I run this code, "drawImage fired!" is logged about 700ms before the image actually appears. It's worth noting it's 700ms in Chrome and 1100ms in Firefox.
window.addEventListener('load', e => {
  console.log("page loaded");
  let canvas = document.getElementById('pcb');
  let context = canvas.getContext("2d");
  let img = new Image();
  img.onload = function() {
    context.drawImage(
      img,
      800, 0,
      800, 800,
      0, 0,
      800, 800
    );
    console.log("drawImage fired!");
  };
  img.src = "/i/sprite-comp.png";
});
In the larger context this code runs in a requestAnimationFrame loop and I only experience this delay during the first execution of drawImage.
I think this is related to the large size of my sprite sheet (28000 × 3200, ~600 kB), though the onload event seems to be firing correctly.
edit: Here's a printout of the time (ms) between rAF frames. I get this result consistently unless I remove the drawImage function.
That's because the load event is only a network event. It only tells you that the browser has fetched the media, parsed the metadata, and recognized it as a valid media file it can decode.
However, the decoding and rendering part may still not have happened when this event fires, and that's why your first rendering takes so long. (Though it used to be a Firefox-only behavior...)
Because yes, drawImage() is synchronous, it will make that decoding + rendering a synchronous operation too. That's so true that you can even use drawImage as a way to tell when an image really is ready...
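For instance, as a rough warm-up sketch (startAnimation here is a stand-in for whatever kicks off your requestAnimationFrame loop), you could force that synchronous decode once, on a throwaway canvas, right after load:

img.onload = () => {
  // Drawing once to a scratch canvas forces the decode + first render now,
  // so the first drawImage inside the animation loop won't stall.
  const scratch = document.createElement('canvas');
  scratch.getContext('2d').drawImage(img, 0, 0);
  startAnimation(); // assumed entry point of the rAF loop
};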
Note that there is now a decode() method on the HTMLImageElement interface that tells us exactly this, in a non-blocking way, so it's better to use it when available, and in any case to perform warm-up rounds of all your functions off-screen before running an intensive graphics app.
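Where decode() is available, a minimal sketch of using it for this sprite sheet might look like the following (assuming ctx is the 2D context from the question's code):

const img = new Image();
img.src = "/i/sprite-comp.png";
img.decode()                                   // resolves once the image data is decoded
  .then(() => {
    // same draw call as in the question, but without the first-draw stall
    ctx.drawImage(img, 800, 0, 800, 800, 0, 0, 800, 800);
  })
  .catch((err) => console.error("decoding failed", err));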
But since your source image is a sprite sheet, you might actually be more interested in the createImageBitmap() method, which will generate an ImageBitmap from your source image, optionally cropped to a region of it. These ImageBitmaps are already decoded and can be drawn to the canvas with no delay. It should be your preferred way, since it also avoids drawing the whole sprite sheet every time. And for browsers that don't support this method, you can monkey-patch it by returning an HTMLCanvasElement with the relevant part of the image drawn on it:
if (typeof window.createImageBitmap !== "function") {
  window.createImageBitmap = monkeyPatch;
}

var img = new Image();
img.crossOrigin = "anonymous";
img.src = "https://upload.wikimedia.org/wikipedia/commons/b/be/SpriteSheet.png";
img.onload = function() {
  makeSprites()
    .then(draw);
};

function makeSprites() {
  var coords = [],
    x, y;
  for (y = 0; y < 3; y++) {
    for (x = 0; x < 4; x++) {
      coords.push([x * 132, y * 97, 132, 97]);
    }
  }
  return Promise.all(coords.map(function(opts) {
    return createImageBitmap.apply(window, [img].concat(opts));
  }));
}

function draw(sprites) {
  var delay = 96;
  var current = 0,
    lastTime = performance.now(),
    ctx = document.getElementById('canvas').getContext('2d');
  anim();

  function anim(t) {
    requestAnimationFrame(anim);
    if (t - lastTime < delay) return;
    lastTime = t;
    current = (current + 1) % sprites.length;
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.drawImage(sprites[current], 0, 0);
  }
}

function monkeyPatch(source, sx, sy, sw, sh) {
  return Promise.resolve()
    .then(drawImage);

  function drawImage() {
    var canvas = document.createElement('canvas');
    canvas.width = sw || source.naturalWidth || source.videoWidth || source.width;
    canvas.height = sh || source.naturalHeight || source.videoHeight || source.height;
    canvas.getContext('2d').drawImage(source,
      sx || 0, sy || 0, canvas.width, canvas.height,
      0, 0, canvas.width, canvas.height
    );
    return canvas;
  }
}
<canvas id="canvas" width="132" height="97"></canvas>
I am loading a jpg image in Python on the server. Then I am loading the same jpg image with JavaScript on the client. Finally, I am trying to compare it with the Python output. But the loaded data is different, so the images do not match. Where is my mistake?
Python code
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
filename = './rcl.jpg'
original = load_img(filename)
numpy_image = img_to_array(original)
print(numpy_image)
JS code
import * as tf from '@tensorflow/tfjs';

photo() {
  var can = document.createElement('canvas');
  var ctx = can.getContext("2d");
  var img = new Image();
  img.onload = function() {
    ctx.drawImage(img, 0, 0);
  };
  img.crossOrigin = "anonymous";
  img.src = './rcl.jpg';
  var tensor = tf.fromPixels(can).toFloat();
  tensor.print();
}
You are drawing the image on a canvas before rendering the canvas as a tensor. Drawing on a canvas can alter the shape of the initial image. For instance, unless specified otherwise (and your code doesn't specify otherwise), a canvas is created with a width of 300px and a height of 150px. Therefore the resulting shape of the tensor will be something like [150, 300, 3].
1- Using Canvas
A canvas is well suited to resizing an image, as one can draw all or part of the initial image on it. In that case, one needs to resize the canvas to match the image.
const canvas = document.createElement('canvas')
// canvas has initial width: 300px and height: 150px
canvas.width = image.width
canvas.height = image.height
// canvas is now sized to redraw the initial image
const ctx = canvas.getContext('2d')
ctx.drawImage(image, 0, 0) // to draw the entire image
One word of caution though: all of the above should only be executed after the image has finished loading, by using the onload event handler, as follows:
const im = new Image()
im.crossOrigin = 'anonymous'
im.src = 'url'
// document.body.appendChild(im) (optional, if the image should be displayed)
im.onload = () => {
  const canvas = document.createElement('canvas')
  canvas.width = im.width
  canvas.height = im.height
  const ctx = canvas.getContext('2d')
  ctx.drawImage(im, 0, 0)
}
or using async/await
function load(url) {
  return new Promise((resolve, reject) => {
    const im = new Image()
    im.crossOrigin = 'anonymous'
    im.src = url
    im.onload = () => {
      resolve(im)
    }
  })
}

// use the load function inside an async function
(async () => {
  const image = await load(url)
  const canvas = document.createElement('canvas')
  canvas.width = image.width
  canvas.height = image.height
  const ctx = canvas.getContext('2d')
  ctx.drawImage(image, 0, 0)
})()
2- Using fromPixels on the image directly
If the image is not to be resized, you can directly render the image as a tensor by using fromPixels on the image itself.
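A minimal sketch of that direct approach, reusing the load() helper from above (tf.fromPixels is the API name used in this question; newer tfjs versions expose it as tf.browser.fromPixels):

(async () => {
  const image = await load('./rcl.jpg');          // wait for the <img> to be fully loaded
  const tensor = tf.fromPixels(image).toFloat();  // shape: [image.height, image.width, 3]
  tensor.print();
})();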