I am loading an image in React from a URL external to my app:
<img id={selectedImage.id} src={selectedImage.url} />
Now I want the user to be able to press a button and the app should store the image content as base64 data in some variable.
Is there any way for me to reference this specific <img> in React and convert the image content into its base64 text representation?
I have seen various solutions on Stack Overflow. Some of these solutions attempt to retrieve the image from a URL again (which caused CORS issues in my case and also represents redundant loading). E.g. I have looked here:
Convert image from url to Base64
If I wanted to implement this solution I would not know how to do so in a React environment:
I don't know how this document.getElementById("imageid") works in a React environment; and
I would also not know where to place the function in React that is supposed to draw a canvas.
You can use a ref; when the image loads, call the getBase64Image method and pass the img element to it:
import { useRef } from "react";

function getBase64Image(img) {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0);
  const dataURL = canvas.toDataURL("image/png");
  // strip the "data:image/png;base64," prefix
  return dataURL.replace(/^data:image\/[a-z]*;base64,/, "");
}

// outside React you would write:
// var base64 = getBase64Image(document.getElementById("imageid"));

// method 1: use a ref
export function Img({ imageSource }) {
  // imageRef.current is the same element document.getElementById would return
  const imageRef = useRef(null);
  return (
    <img
      ref={imageRef}
      src={imageSource}
      onLoad={() => getBase64Image(imageRef.current)}
    />
  );
}
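Since the goal in the question is to convert on a button press rather than on load, here is a minimal sketch (ImageWithCapture is a hypothetical name; it assumes the getBase64Image helper above and the selectedImage object from the question) that stores the result in component state:

```jsx
import { useRef, useState } from "react";

export function ImageWithCapture({ selectedImage }) {
  const imageRef = useRef(null);
  const [base64, setBase64] = useState(null);

  const handleClick = () => {
    const img = imageRef.current;
    // only convert once the browser has finished loading the image
    if (img && img.complete) {
      setBase64(getBase64Image(img));
    }
  };

  return (
    <>
      {/* crossOrigin is needed so the canvas is not tainted;
          the remote server must still send CORS headers */}
      <img
        id={selectedImage.id}
        src={selectedImage.url}
        ref={imageRef}
        crossOrigin="anonymous"
      />
      <button onClick={handleClick}>Store as base64</button>
    </>
  );
}
```

Note that toDataURL() throws on a tainted canvas, so even with this approach the server hosting the image must allow cross-origin access.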
I am trying to predict a drawing made on an HTML canvas element with the help of Teachable Machine (TensorFlow.js). However, when I try to predict using model.predict(image), it always gives me the same output: [0.2351463884, 0.76485371]. I have tried using an HTML image element and it works fine. The problem only occurs when I use an HTML canvas element.
const URL = "https://teachablemachine.withgoogle.com/models/S5O-110Gl/";
let model, labelContainer, maxPredictions;
// predict drawing on canvas when button is pressed
document.getElementById('start').addEventListener('click', async function() {
const modelURL = URL + "model.json";
const metadataURL = URL + "metadata.json";
// load the model and metadata
// Refer to tmImage.loadFromFiles() in the API to support files from a file picker
// or files from your local hard drive
// Note: the pose library adds "tmImage" object to your window (window.tmImage)
model = await tmImage.load(modelURL, metadataURL);
maxPredictions = model.getTotalClasses();
const image = document.getElementById('myCanvas'); // the drawing canvas
const prediction = await model.predict(image);
console.log(prediction)
});
The TensorFlow model probably does not accept a canvas as a source, only image elements. But you can draw the canvas back into an image:
<canvas id="canvas" width="640" height="360"></canvas>
let canvas = document.getElementById("canvas");
let image = new Image();
// wait until the data URL has been decoded before using the image
image.onload = () => {
  document.body.appendChild(image);
};
image.src = canvas.toDataURL();
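The image is only usable after its load event fires, so a small sketch (assuming the tmImage model from the question) would wrap the load in a promise and predict only after it resolves:

```javascript
// helper: turn a canvas into a fully loaded <img> element
function canvasToImage(canvas) {
  return new Promise((resolve, reject) => {
    const image = new Image();
    image.onload = () => resolve(image);
    image.onerror = reject;
    image.src = canvas.toDataURL();
  });
}

// usage inside the click handler from the question:
// const image = await canvasToImage(document.getElementById('myCanvas'));
// const prediction = await model.predict(image);
```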
I'm making a dream journal application in Electron and Svelte. I have a custom file format containing a title, description, and one or more images. See:
Program Input
File Output
When I need to, I can call ipcRenderer.invoke() to read the file in the main process, then return the result to the renderer process. (Don't worry, I'm using async/await to make sure I'm not just getting a promise back. Also, for my testing, I'm only sending back the Uint8Array representing the image.)
After attempting to display the image and failing, I decided I'd check to see that I was receiving the information as intended. I sent the response as-is back to the main process and wrote it to a file. When I opened the file in Paint, it displayed.
So, the information is correct. This is the code I tried to display the image with:
within <script>
let src;
onMount(async () => {
let a = await ipcRenderer.invoke("file-read");
console.log(a);
let blob = new Blob(a, {type: 'image/png'});
console.log(blob);
ipcRenderer.send("verify-content", a); // this is the test I mentioned, where it was written to a file
src = URL.createObjectURL(blob);
});
in the body
{#if src}
<img src={src} />
{/if}
I also tried it another way:
within <script>
onMount(async () => {
let a = await ipcRenderer.invoke("file-read");
console.log(a);
let blob = new Blob(a, {type: 'image/png'});
console.log(blob);
ipcRenderer.send("verify-content", a);
const img = document.createElement("img");
img.src = URL.createObjectURL(blob);
img.onload = function() {
URL.revokeObjectURL(this.src);
}
document.getElementById("target").appendChild(img);
});
in the body
<div id="target"></div>
However, this is all I got:
It does not display. How can I display this image? All the other "blob to img" examples I found used the type="file" <input /> tag. If possible, I'd like to avoid using a Base64 data URI. Thanks.
It turns out that I have to wrap my Uint8Array in an array when I make a Blob out of it (wtf):
let blob = new Blob([a], {type: "image/png"});
I want to capture a screenshot as an image, not a video. I have found ways to do this, but they either capture the current web page or record a video of the entire screen. I found this library to record the screen; it does something similar to what WebRTC does. My requirement is to take an image of the entire screen programmatically from my web application, written in plain JavaScript. Is there any way I can do it?
Thanks
From the MediaStream you get through the Media Capture and Streams API, you can create an ImageCapture instance and call its grabFrame() method that will produce an ImageBitmap you'll be able to paint on a <canvas>.
const stream = await navigator.mediaDevices.getDisplayMedia();
const track = stream.getVideoTracks()[0];
const capture = new ImageCapture(track);
// when you need the still image
const bitmap = await capture.grabFrame();
// if you want a Blob version
const canvas = document.createElement("canvas");
canvas.width = bitmap.width;
canvas.height = bitmap.height;
canvas.getContext("bitmaprenderer").transferFromImageBitmap(bitmap);
const blob = await new Promise((res) => canvas.toBlob(res));
Now, I should note that this ideal path is currently only available in Chromium-based browsers.
For other browsers you'd need to set the srcObject of an HTMLVideoElement to the MediaStream, and then drawImage that HTMLVideoElement on a 2D context.
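That fallback could look like the following sketch (grabStillFrame is a hypothetical helper; it assumes the MediaStream obtained from getDisplayMedia as above):

```javascript
// Fallback for browsers without ImageCapture: route the MediaStream
// through a <video> element and draw one frame onto a canvas
async function grabStillFrame(stream) {
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play(); // resolves once playback (and thus frames) has started

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);

  video.srcObject = null; // detach the stream from the helper element
  return new Promise((resolve) => canvas.toBlob(resolve));
}
```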
I recently used canvas to convert images to WebP, using:
const dataUrl = canvas.toDataURL('image/webp');
But this takes a lot of time for certain images, like 400ms.
I got a warning from Chrome, since it blocks the UI.
I would like to use an OffscreenCanvas to perform that conversion in the background.
But:
1) I don't know which offscreen canvas approach I should use:
a] new OffscreenCanvas()
b] canvas.transferControlToOffscreen()
2) I load a local image URL into an Image object (img.src = url) to get the width and height of the local image. But I don't understand how to transfer the Image object to the offscreen canvas, to be able to do the following in the worker:
ctx.drawImage(img, 0, 0)
Because if I don't transfer the image, the worker doesn't know about img.
You are facing an XY, and even a -Z, problem here, but each part may have a useful answer, so let's dig in.
X. Do not use the canvas API to perform image format conversion.
The canvas API is lossy: whatever you do, you will lose information from your original image. Even if you pass it lossless images, the image drawn on the canvas will not be the same as the original.
If you pass an already lossy format like JPEG, it will even add information that was not in the original image: the compression artifacts are now part of the raw bitmap, and the export algorithm will treat these as information it should keep, probably making your file bigger than the JPEG file you fed it.
Not knowing your use case, it's a bit hard to give you the perfect advice, but generally, generate the different formats from the version closest to the raw image; once it's painted in a browser, you are already at least three steps too late.
Now, if you do some processing on this image, you may indeed want to export the results.
But you probably don't need this Web Worker here.
Y. The biggest blocking time in your description should be the synchronous toDataURL() call.
Instead of this historical error in the API, you should always use the asynchronous, and nonetheless more performant, toBlob() method. In 99% of cases you don't need a data URL anyway; almost everything you would do with a data URL can be done with a Blob directly.
Using this method, the only heavy synchronous operation remaining would be the painting on the canvas, and unless you are downsizing some huge images, this should not take 400ms.
But you can make it even better in newer browsers thanks to the createImageBitmap method, which lets you prepare your image asynchronously so that decoding is complete and all that needs to be done is really just a put-pixels operation:
large.onclick = e => process('https://upload.wikimedia.org/wikipedia/commons/c/cf/Black_hole_-_Messier_87.jpg');
medium.onclick = e => process('https://upload.wikimedia.org/wikipedia/commons/thumb/c/cf/Black_hole_-_Messier_87.jpg/1280px-Black_hole_-_Messier_87.jpg');
function process(url) {
convertToWebp(url)
.then(prepareDownload)
.catch(console.error);
}
async function convertToWebp(url) {
if(!supportWebpExport())
console.warn("your browser doesn't support webp export, will default to png");
let img = await loadImage(url);
if(typeof window.createImageBitmap === 'function') {
img = await createImageBitmap(img);
}
const ctx = get2DContext(img.width, img.height);
console.time('only sync part');
ctx.drawImage(img, 0,0);
console.timeEnd('only sync part');
return new Promise((res, rej) => {
ctx.canvas.toBlob( blob => {
if(!blob) { rej(ctx.canvas); return; }
res(blob);
}, 'image/webp');
});
}
// some helpers
function loadImage(url) {
return new Promise((res, rej) => {
const img = new Image();
img.crossOrigin = 'anonymous';
img.src = url;
img.onload = e => res(img);
img.onerror = rej;
});
}
function get2DContext(width = 300, height=150) {
return Object.assign(
document.createElement('canvas'),
{width, height}
).getContext('2d');
}
function prepareDownload(blob) {
const a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'image.' + blob.type.replace('image/', '');
a.textContent = 'download';
document.body.append(a);
}
function supportWebpExport() {
return get2DContext(1,1).canvas
.toDataURL('image/webp')
.indexOf('image/webp') > -1;
}
<button id="large">convert large image (7,416 × 4,320 pixels)</button>
<button id="medium">convert medium image (1,280 × 746 pixels)</button>
Z. To draw an image on an OffscreenCanvas from a Web Worker, you will need the createImageBitmap method mentioned above. Indeed, the ImageBitmap object produced by this method is the only image source value accepted by drawImage() and texImage2D()(*) that is available in Workers (all the others being DOM elements).
This ImageBitmap is transferable, so you can generate it from the main thread and then send it to your Worker with no memory cost:
main.js
const img = new Image();
img.onload = e => {
  createImageBitmap(img).then(bmp => {
    // transfer it to your worker
    worker.postMessage(
      { image: bmp }, // the key to retrieve it in `event.data`
      [bmp] // transfer it
    );
  });
};
img.src = url;
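On the receiving side, the Worker can draw the transferred bitmap onto an OffscreenCanvas and export it there; a sketch (the `image` key matches the postMessage call above):

```javascript
// worker.js
self.onmessage = (event) => {
  const bmp = event.data.image; // the transferred ImageBitmap
  const canvas = new OffscreenCanvas(bmp.width, bmp.height);
  const ctx = canvas.getContext("2d");
  ctx.drawImage(bmp, 0, 0);
  // OffscreenCanvas exports asynchronously, off the main thread
  canvas.convertToBlob({ type: "image/webp" })
    .then((blob) => self.postMessage(blob));
};
```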
Another solution is to fetch your image's data from the Worker directly and generate the ImageBitmap object from the fetched Blob:
worker.js
const blob = await fetch(url).then(r => r.blob());
const img = await createImageBitmap(blob);
ctx.drawImage(img,0,0);
And note that if you got the original image in your main page as a Blob (e.g. from an <input type="file">), then don't even go the way of the HTMLImageElement, nor of fetching; send this Blob directly and generate the ImageBitmap from it.
*texImage2D actually accepts more source image formats, such as TypedArrays and ImageData objects, but those TypedArrays must represent pixel data, just like an ImageData does, and in order to have that pixel data you probably need to have already drawn the image somewhere using one of the other image source formats.