I'm rewriting a small JavaScript file so that I can run it in a worker.js, as documented here:
Mozilla - Web_Workers_API
The worker.js shall display an image on an OffscreenCanvas like it is documented here:
Mozilla - OffscreenCanvas documentation
The initial script uses the following statements, which obviously cannot be used in a worker.js file because there is no "document":
var imgElement = document.createElement("img");
imgElement.src = canvas.toDataURL("image/png");
But how can I substitute the
document.createElement("img");
statement in the worker.js so that I can still use the second statement:
imgElement.src = canvas.toDataURL("image/png");
If anyone has any idea, it would be really appreciated. :)
Just don't.
Instead of exporting the canvas content and making the browser decode that image just to display it, simply display the HTMLCanvasElement directly.
This advice already applied before you switched to an OffscreenCanvas, and it still does.
Then how to draw on an OffscreenCanvas in a Worker and still display it? I hear you ask.
Well, you can request an OffscreenCanvas from an HTMLCanvasElement through its transferControlToOffscreen() method.
So the way to go is: in the UI thread, you generate the <canvas> element that will be used for displaying the image, and you generate an OffscreenCanvas from it. Then you start your Worker, to which you'll transfer the OffscreenCanvas.
In the Worker, you'll wait for the OffscreenCanvas in the onmessage event, grab the context, and draw on it.
UI thread
const canvas = document.createElement("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(url);
worker.postMessage(offscreen, [offscreen]);
container.append(canvas);
Worker thread
onmessage = (evt) => {
const canvas = evt.data;
const ctx = canvas.getContext(ctx_type);
  // ... draw on the context here
};
All the drawings made from the Worker will get painted on the visible canvas, without blocking the UI thread at all.
const canvas = document.querySelector("canvas");
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker(getWorkerURL());
worker.postMessage(offscreen, [offscreen]);
function getWorkerURL() {
const worker_script = `
onmessage = (evt) => {
const canvas = evt.data;
const w = canvas.width = 500;
const h = canvas.height = 500;
const ctx = canvas.getContext("2d");
// draw some noise
const img = new ImageData(w,h);
const arr = new Uint32Array(img.data.buffer);
for( let i=0; i<arr.length; i++ ) {
arr[i] = Math.random() * 0xFFFFFFFF;
}
ctx.putImageData(img, 0, 0);
for( let i = 0; i < 500; i++ ) {
ctx.arc( Math.random() * w, Math.random() * h, Math.random() * 20, 0, Math.PI*2 );
ctx.closePath();
}
ctx.globalCompositeOperation = "xor";
ctx.fill();
};
`;
const blob = new Blob( [ worker_script ] );
return URL.createObjectURL( blob );
}
canvas { border: 1px solid; }
<canvas></canvas>
I have the following function that is able to generate a thumbnail from a video:
async function getThumbnailForVideo(videoUrl) {
const video = document.createElement("video");
const canvas = document.createElement("canvas");
video.style.display = "none";
canvas.style.display = "none";
// Trigger video load
await new Promise((resolve, reject) => {
video.addEventListener("loadedmetadata", () => {
video.width = video.videoWidth;
video.height = video.videoHeight;
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
// Seek the video to 25%
video.currentTime = video.duration * 0.25;
});
video.addEventListener("seeked", () => resolve());
video.src = videoUrl;
});
// Draw the thumbnail
canvas
.getContext("2d")
.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
const imageUrl = canvas.toDataURL("image/png");
return imageUrl;
}
Paired with URL.createObjectURL, I am able to generate a thumbnail from a user-selected video file. I have created the following test project on StackBlitz for testing: App Editor / App Preview.
While this seems to work fine for Chrome and Safari, it seems that Firefox does not respect the EXIF information of a video and as such draws it incorrectly.
The MDN documentation for CanvasRenderingContext2D.drawImage explicitly states that:
drawImage() will ignore all EXIF metadata in images, including the Orientation. You should detect the Orientation yourself and use rotate() to make it right.
Modernizr hints at a solution via its exiforientation feature detection should I be able to read the rotation data from the file such that I only need to perform the extra transformations on Firefox.
I'm curious, is there a more idempotent solution to drawing an image from a HTMLVideoElement on all browsers?
So it turns out that the Modernizr exiforientation test only checks if an img element respects the EXIF data of an image, but not if the same image drawn onto a canvas is rendered correctly.
I set out instead to create my own test by drawing a known video on the canvas and testing it. I created the video like so using ffmpeg:
ffmpeg -filter_complex \
"color=color=#ffffff:duration=1us:size=4x4[bg]; \
color=color=#ff0000:duration=1us:size=2x2[r]; \
color=color=#00ff00:duration=1us:size=2x2[g]; \
color=color=#0000ff:duration=1us:size=2x2[b]; \
[bg][r]overlay=x=2:y=0:format=rgb:alpha=premultiplied[bg+r]; \
[bg+r][g]overlay=x=0:y=2:format=rgb:alpha=premultiplied[bg+r+g]; \
[bg+r+g][b]overlay=x=2:y=2:format=rgb:alpha=premultiplied[bg+r+g+b]" \
-map "[bg+r+g+b]" \
-y wrgb-0.mp4
ffmpeg -i wrgb-0.mp4 -c copy -metadata:s:v:0 rotate=180 -y wrgb-180.mp4
Using the same demo, I can see that Chrome and Firefox generate different video previews on the canvas.
Chrome: blue, green, red, white
Firefox: white, red, green, blue
Next, I just needed a function that, given an array of RGBA values from the canvas, would spit out the colour pattern:
function getColourPattern(rgbaData) {
let pattern = "";
for (let i = 0; i < rgbaData.length; i += 4) {
const r = rgbaData[i] / 255;
const g = rgbaData[i + 1] / 255;
const b = rgbaData[i + 2] / 255;
const w = (r + g + b) / 3;
if (w > 0.9) {
pattern += "w";
continue;
}
switch (Math.max(r, g, b)) {
case r:
pattern += "r";
break;
case g:
pattern += "g";
break;
case b:
pattern += "b";
break;
}
}
return pattern;
}
This returns bbggbbggrrwwrrww on Chrome & Safari, and wwrrwwrrggbbggbb on Firefox (with canvas fingerprinting turned off).
I then used basenc --base64 wrgb-180.mp4 -w 0 to get a base64 representation of the video so that I could embed it into a single test function:
export async function canvasUsesEXIF() {
const videoUrl = `data:video/mp4;base64,AAAAIGZ0eXBpc29tAAACAGlzb21pc28yYXZjMW1wNDEAAAAIZnJlZQAAAvxtZGF0AAACrgYF//+q3EXpvebZSLeWLNgg2SPu73gyNjQgLSBjb3JlIDE1OSByMjk5OSAyOTY0OTRhIC0gSC4yNjQvTVBFRy00IEFWQyBjb2RlYyAtIENvcHlsZWZ0IDIwMDMtMjAyMCAtIGh0dHA6Ly93d3cudmlkZW9sYW4ub3JnL3gyNjQuaHRtbCAtIG9wdGlvbnM6IGNhYmFjPTEgcmVmPTMgZGVibG9jaz0xOjA6MCBhbmFseXNlPTB4MzoweDExMyBtZT1oZXggc3VibWU9NyBwc3k9MSBwc3lfcmQ9MS4wMDowLjAwIG1peGVkX3JlZj0xIG1lX3JhbmdlPTE2IGNocm9tYV9tZT0xIHRyZWxsaXM9MSA4eDhkY3Q9MSBjcW09MCBkZWFkem9uZT0yMSwxMSBmYXN0X3Bza2lwPTEgY2hyb21hX3FwX29mZnNldD0tMiB0aHJlYWRzPTEgbG9va2FoZWFkX3RocmVhZHM9MSBzbGljZWRfdGhyZWFkcz0wIG5yPTAgZGVjaW1hdGU9MSBpbnRlcmxhY2VkPTAgYmx1cmF5X2NvbXBhdD0wIGNvbnN0cmFpbmVkX2ludHJhPTAgYmZyYW1lcz0zIGJfcHlyYW1pZD0yIGJfYWRhcHQ9MSBiX2JpYXM9MCBkaXJlY3Q9MSB3ZWlnaHRiPTEgb3Blbl9nb3A9MCB3ZWlnaHRwPTIga2V5aW50PTI1MCBrZXlpbnRfbWluPTI1IHNjZW5lY3V0PTQwIGludHJhX3JlZnJlc2g9MCByY19sb29rYWhlYWQ9NDAgcmM9Y3JmIG1idHJlZT0xIGNyZj0yMy4wIHFjb21wPTAuNjAgcXBtaW49MCBxcG1heD02OSBxcHN0ZXA9NCBpcF9yYXRpbz0xLjQwIGFxPTE6MS4wMACAAAAAPmWIhAAt/9pbuD7Z/gvI3kF2QzYeJnVbANgW8XnGVlnoDJNW7zJawMem6POfQ3cvmVl9l7mrZDdjuR26xB2/AAADAm1vb3YAAABsbXZoZAAAAAAAAAAAAAAAAAAAA+gAAAAoAAEAAAEAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAAIsdHJhawAAAFx0a2hkAAAAAwAAAAAAAAAAAAAAAQAAAAAAAAAoAAAAAAAAAAAAAAAAAAAAAP//AAAAAAAAAAAAAAAAAAD//wAAAAAAAAAAAAAAAAAAQAAAAAAEAAAABAAAAAAAJGVkdHMAAAAcZWxzdAAAAAAAAAABAAAAKAAAAAAAAQAAAAABpG1kaWEAAAAgbWRoZAAAAAAAAAAAAAAAAAAAMgAAAAIAVcQAAAAAAC1oZGxyAAAAAAAAAAB2aWRlAAAAAAAAAAAAAAAAVmlkZW9IYW5kbGVyAAAAAU9taW5mAAAAFHZtaGQAAAABAAAAAAAAAAAAAAAkZGluZgAAABxkcmVmAAAAAAAAAAEAAAAMdXJsIAAAAAEAAAEPc3RibAAAAKtzdHNkAAAAAAAAAAEAAACbYXZjMQAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAEAAQASAAAAEgAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABj//wAAADVhdmNDAWQACv/hABhnZAAKrNlfnnwEQAAAAwBAAAAMg8SJZYABAAZo6+PLIsD9+PgAAAAAEHBhc3AAAAABAAAAAQAAABhzdHRzAAAAAAAAAAEAAAABAAACAAAAABxzdHNjAAAAAAAAAAEAAAABAAAAAQAAAAEAAAAUc3RzegAAAAAAAAL0AAAAAQAAABRzdGNvAAAAAAAAAAEAAAAwAAAAYnVkdGEAAABabWV0YQAAAAAAAAAhaGRscgAAAAAAAAAAbWRpcmFwcGwAAAAAAAAAAAAAAAAtaWxzdAAAACWpdG9vAAAAHWRhdGEAAAABAAAAAExhdmY1OC40NS4xMDA=`;
const video = document.createElement("video");
const canvas = document.createElement("canvas");
video.style.display = "none";
canvas.style.display = "none";
await new Promise((resolve, reject) => {
video.addEventListener("canplay", () => {
video.width = video.videoWidth;
video.height = video.videoHeight;
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
video.currentTime = 0;
});
video.addEventListener("seeked", () => resolve());
video.src = videoUrl;
});
const context = canvas.getContext("2d");
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
const { data } = context.getImageData(0, 0, 4, 4);
return getColourPattern(data) === "bbggbbggrrwwrrww";
}
Now assuming you have the rotation metadata of the video, you should be able to test if you need to rotate it on the canvas manually 🤓
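For reference, assuming you have already read a rotation value from the file's metadata (the rotation variable below is hypothetical), the manual correction for the 180° case could look roughly like this:
const context = canvas.getContext("2d");
context.save();
if (rotation === 180) {
  // rotate around the centre of the canvas, then draw as usual
  context.translate(canvas.width / 2, canvas.height / 2);
  context.rotate(Math.PI);
  context.translate(-canvas.width / 2, -canvas.height / 2);
}
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
context.restore();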
Edit 1:
This should stop Firefox on Windows from throwing an NS_ERROR_NOT_AVAILABLE error.
9c9
< video.addEventListener("loadedmetadata", () => {
---
> video.addEventListener("canplay", () => {
I set up a simple Node.js server to serve a .wav file to my local frontend.
require('dotenv').config();
const debugBoot = require('debug')('boot');
const cors = require('cors')
const express = require('express');
const app = express();
app.set('port', process.env.PORT || 3000);
app.use(cors());
app.use(express.static('public'));
const server = app.listen(app.get('port'), () => {
const port = server.address().port;
debugBoot('Server running at http://localhost:' + port);
});
On my local frontend I receive the file:
fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(response => process(response.body));
function process(stream) {
  console.log(stream);
  const context = new AudioContext();
  const analyser = context.createAnalyser();
  const source = context.createMediaStreamSource(stream);
  source.connect(analyser);
  const data = new Uint8Array(analyser.frequencyBinCount);
}
I want to pipe the stream into AudioContext().createMediaStreamSource. I could do this with a MediaStream, e.g., from the microphone.
But with the ReadableStream, I get the error Failed to execute 'createMediaStreamSource' on 'AudioContext': parameter 1 is not of type 'MediaStream'.
I want to serve/receive the audio in a way that I can plug it into the Web Audio API and use the analyser. It doesn't need to be a stream if there is another solution.
I basically merged these two examples together:
https://www.youtube.com/watch?v=hYNJGPnmwls (https://codepen.io/jakealbaugh/pen/jvQweW/)
and the example from the web-audio api:
https://github.com/mdn/webaudio-examples/blob/master/audio-analyser/index.html
let audioContext;
let audioBuffer;
let sourceNode;
let analyserNode;
let javascriptNode;
let audioData = null;
let audioPlaying = false;
let sampleSize = 1024; // number of samples to collect before analyzing data
let frequencyDataArray; // array to hold frequency data
// Global Variables for the Graphics
let canvasWidth = 512;
let canvasHeight = 256;
let ctx;
document.addEventListener("DOMContentLoaded", function () {
ctx = document.body.querySelector('canvas').getContext("2d");
// the AudioContext is the primary 'container' for all your audio node objects
try {
audioContext = new AudioContext();
} catch (e) {
alert('Web Audio API is not supported in this browser');
}
// When the Start button is clicked, finish setting up the audio nodes, play the sound,
// gather samples for the analysis, update the canvas
document.body.querySelector('#start_button').addEventListener('click', function (e) {
e.preventDefault();
// Set up the audio Analyser, the Source Buffer and javascriptNode
initCanvas();
setupAudioNodes();
javascriptNode.onaudioprocess = function () {
// get the frequency data for this sample
analyserNode.getByteFrequencyData(frequencyDataArray);
// draw the display if the audio is playing
console.log(frequencyDataArray)
draw();
};
loadSound();
});
document.body.querySelector("#stop_button").addEventListener('click', function(e) {
e.preventDefault();
sourceNode.stop(0);
audioPlaying = false;
});
function loadSound() {
fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(response => {
response.arrayBuffer().then(function (buffer) {
audioContext.decodeAudioData(buffer).then((audioBuffer) => {
console.log('audioBuffer', audioBuffer);
// {length: 1536000, duration: 32, sampleRate: 48000, numberOfChannels: 2}
audioData = audioBuffer;
playSound(audioBuffer);
});
});
})
}
function setupAudioNodes() {
sourceNode = audioContext.createBufferSource();
analyserNode = audioContext.createAnalyser();
analyserNode.fftSize = 4096;
javascriptNode = audioContext.createScriptProcessor(sampleSize, 1, 1);
// Create the array for the data values
frequencyDataArray = new Uint8Array(analyserNode.frequencyBinCount);
// Now connect the nodes together
sourceNode.connect(audioContext.destination);
sourceNode.connect(analyserNode);
analyserNode.connect(javascriptNode);
javascriptNode.connect(audioContext.destination);
}
function initCanvas() {
ctx.fillStyle = 'hsl(280, 100%, 10%)';
ctx.fillRect(0, 0, canvasWidth, canvasHeight);
};
// Play the audio once
function playSound(buffer) {
sourceNode.buffer = buffer;
sourceNode.start(0); // Play the sound now
sourceNode.loop = false;
audioPlaying = true;
}
function draw() {
const data = frequencyDataArray;
const dataLength = frequencyDataArray.length;
console.log("data", data);
const h = canvasHeight / dataLength;
// draw on the right edge
const x = canvasWidth - 1;
// copy the old image and move one left
let imgData = ctx.getImageData(1, 0, canvasWidth - 1, canvasHeight);
ctx.fillRect(0, 0, canvasWidth, canvasHeight);
ctx.putImageData(imgData, 0, 0);
for (let i = 0; i < dataLength; i++) {
// console.log(data)
let rat = data[i] / 255;
let hue = Math.round((rat * 120) + 280 % 360);
let sat = '100%';
let lit = 10 + (70 * rat) + '%';
// console.log("rat %s, hue %s, lit %s", rat, hue, lit);
ctx.beginPath();
ctx.strokeStyle = `hsl(${hue}, ${sat}, ${lit})`;
ctx.moveTo(x, canvasHeight - (i * h));
ctx.lineTo(x, canvasHeight - (i * h + h));
ctx.stroke();
}
}
});
I'll briefly explain what each part does:
Creating the audio context
When the DOM loads, the AudioContext is created.
Loading the audio file and converting it to an AudioBuffer
Then I load the sound from my backend server (the code is shown above). The response is converted to an ArrayBuffer, which is then decoded into an AudioBuffer. This is basically the main solution for the question above.
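Condensed into a small helper, that decoding flow is roughly this (a sketch; loadAudioBuffer is just an illustrative name):
// Sketch: fetch the file, read it as an ArrayBuffer, then decode it.
async function loadAudioBuffer(audioContext, url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  return audioContext.decodeAudioData(arrayBuffer);
}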
Processing the AudioBuffer
To give a little more context on how to use the loaded audio file, I included the rest of the file.
To further process the AudioBuffer, a source is created and the buffer is assigned to it: sourceNode.buffer = buffer.
The javascriptNode acts, IMHO, like a stream through which you can access the output of the analyser.
Is it possible to capture or print what's displayed in an HTML canvas as an image or PDF?
I'd like to generate an image via canvas and be able to generate a PNG from that image.
Original answer was specific to a similar question. This has been revised:
const canvas = document.getElementById('mycanvas')
const img = canvas.toDataURL('image/png')
With the value in img you can write it out as a new image like so:
document.getElementById('existing-image-id').src = img
or
document.write('<img src="'+img+'"/>');
HTML5 provides Canvas.toDataURL(mimetype) which is implemented in Opera, Firefox, and Safari 4 beta. There are a number of security restrictions, however (mostly to do with drawing content from another origin onto the canvas).
So you don't need an additional library.
e.g.
<canvas id=canvas width=200 height=200></canvas>
<script>
window.onload = function() {
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
context.fillStyle = "green";
context.fillRect(50, 50, 100, 100);
// no argument defaults to image/png; image/jpeg, etc also work on some
// implementations -- image/png is the only one that must be supported per spec.
window.location = canvas.toDataURL("image/png");
}
</script>
Theoretically this should create and then navigate to an image with a green square in the middle of it, but I haven't tested.
I thought I'd extend the scope of this question a bit, with some useful tidbits on the matter.
In order to get the canvas as an image, you should do the following:
var canvas = document.getElementById("mycanvas");
var image = canvas.toDataURL("image/png");
You can use this to write the image to the page:
document.write('<img src="'+image+'"/>');
Where "image/png" is a mime type (png is the only one that must be supported). If you would like an array of the supported types you can do something along the lines of this:
var imageMimes = ['image/png', 'image/bmp', 'image/gif', 'image/jpeg', 'image/tiff']; //Extend as necessary
var acceptedMimes = new Array();
for(var i = 0; i < imageMimes.length; i++) {
if(canvas.toDataURL(imageMimes[i]).search(imageMimes[i])>=0) {
acceptedMimes[acceptedMimes.length] = imageMimes[i];
}
}
You only need to run this once per page - it should never change through a page's lifecycle.
If you wish to make the user download the file as it is saved you can do the following:
var canvas = document.getElementById("mycanvas");
var image = canvas.toDataURL("image/png").replace("image/png", "image/octet-stream"); //Convert image to 'octet-stream' (Just a download, really)
window.location.href = image;
If you're using that with different mime types, be sure to change both instances of image/png, but not the image/octet-stream.
It is also worth mentioning that if you use any cross-domain resources in rendering your canvas, you will encounter a security error when you try to use the toDataURL method.
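If the remote server sends the appropriate CORS headers, you can usually avoid tainting the canvas by requesting the image with crossOrigin set; a minimal sketch (the image URL is a placeholder):
const img = new Image();
img.crossOrigin = "anonymous"; // only helps if the server allows CORS
img.onload = function () {
  const canvas = document.getElementById("mycanvas");
  canvas.getContext("2d").drawImage(img, 0, 0);
  const dataUrl = canvas.toDataURL("image/png"); // no security error now
};
img.src = "https://example.com/some-image.png"; // placeholder URL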
function exportCanvasAsPNG(id, fileName) {
var canvasElement = document.getElementById(id);
var MIME_TYPE = "image/png";
var imgURL = canvasElement.toDataURL(MIME_TYPE);
var dlLink = document.createElement('a');
dlLink.download = fileName;
dlLink.href = imgURL;
dlLink.dataset.downloadurl = [MIME_TYPE, dlLink.download, dlLink.href].join(':');
document.body.appendChild(dlLink);
dlLink.click();
document.body.removeChild(dlLink);
}
I would use "wkhtmltopdf". It just work great. It uses webkit engine (used in Chrome, Safari, etc.), and it is very easy to use:
wkhtmltopdf stackoverflow.com/questions/923885/ this_question.pdf
That's it!
Try it
Here is some help if you do the download through a server (this way you can name/convert/post-process/etc your file):
-Post data using toDataURL (a client-side sketch follows the PHP steps below)
-Set the headers
$filename = "test.jpg"; //or png
header('Content-Description: File Transfer');
if($msie = !strstr($_SERVER["HTTP_USER_AGENT"],"MSIE")==false)
header("Content-type: application/force-download");else
header("Content-type: application/octet-stream");
header("Content-Disposition: attachment; filename=\"$filename\"");
header("Content-Transfer-Encoding: binary");
header("Expires: 0"); header("Cache-Control: must-revalidate");
header("Pragma: public");
-create image
$data = $_POST['data'];
$img = imagecreatefromstring(base64_decode(substr($data,strpos($data,',')+1)));
-export image as JPEG
$width = imagesx($img);
$height = imagesy($img);
$output = imagecreatetruecolor($width, $height);
$white = imagecolorallocate($output, 255, 255, 255);
imagefilledrectangle($output, 0, 0, $width, $height, $white);
imagecopy($output, $img, 0, 0, 0, 0, $width, $height);
imagejpeg($output);
exit();
-or as transparent PNG
imagesavealpha($img, true);
imagepng($img);
exit();
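For the first step above (posting the data from toDataURL), a minimal client-side sketch could look like this; the /save-image.php endpoint is hypothetical and should point at wherever you host the PHP above:
const canvas = document.getElementById("mycanvas");
const data = canvas.toDataURL("image/png");
// send the data URL as the 'data' field that the PHP script reads from $_POST['data']
fetch("/save-image.php", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: "data=" + encodeURIComponent(data)
});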
This is the other way, without strings, although I don't really know whether it's faster or not. It avoids toDataURL (which all the answers here propose); in my case I want to avoid dataURL/base64 since I need an ArrayBuffer or a view on one. So the other method on HTMLCanvasElement is toBlob. (TypeScript function):
export function canvasToArrayBuffer(canvas: HTMLCanvasElement, mime: string): Promise<ArrayBuffer> {
return new Promise((resolve, reject) => canvas.toBlob(async (d) => {
if (d) {
const r = new FileReader();
r.addEventListener('loadend', e => {
const ab = r.result;
if (ab) {
resolve(ab as ArrayBuffer);
}
else {
reject(new Error('Expected FileReader result'));
}
});
r.addEventListener('error', e => {
reject(e);
});
r.readAsArrayBuffer(d);
}
else {
reject(new Error('Expected toBlob() to be defined'));
}
}, mime));
}
Another advantage of blobs is that you can create object URLs to represent the data as files, similar to the 'files' member of HTMLInputElement. More info:
https://developer.mozilla.org/en/docs/Web/API/HTMLCanvasElement/toBlob
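A small sketch of that, assuming you just want to show the blob in an <img> via an object URL:
const canvas = document.querySelector("canvas");
canvas.toBlob((blob) => {
  const url = URL.createObjectURL(blob); // file-like URL backed by the blob
  const img = document.createElement("img");
  img.src = url;
  document.body.appendChild(img);
  // call URL.revokeObjectURL(url) once you no longer need it
}, "image/png");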
Another interesting solution is PhantomJS.
It's a headless WebKit, scriptable with JavaScript or CoffeeScript.
One of the use cases is screen capture: you can programmatically capture web contents, including SVG and canvas, and/or create web site screenshots with a thumbnail preview.
The best entry point is the screen capture wiki page.
Here is a good example for polar clock (from RaphaelJS):
>phantomjs rasterize.js http://raphaeljs.com/polar-clock.html clock.png
Do you want to render a page to a PDF?
> phantomjs rasterize.js 'http://en.wikipedia.org/w/index.php?title=Jakarta&printable=yes' jakarta.pdf
If you are using jQuery, which quite a lot of people do, then you would implement the accepted answer like so:
var canvas = $("#mycanvas")[0];
var img = canvas.toDataURL("image/png");
$("#elememt-to-write-to").html('<img src="'+img+'"/>');
The key point is
canvas.toDataURL(type, quality)
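For instance (a quick sketch; the canvas id is a placeholder):
const canvas = document.getElementById("mycanvas"); // placeholder id
// the second argument (0..1) only applies to lossy formats such as image/jpeg or image/webp
const jpegUrl = canvas.toDataURL("image/jpeg", 0.5);
const pngUrl = canvas.toDataURL("image/png"); // quality is ignored for PNG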
And I want to provide an example for someone like me who wants to save an SVG to PNG (you can also add some text if you wish), where the SVG may come from an online source, a Font Awesome icon, etc.
Example
100% JavaScript and no third-party library.
<script>
(() => {
window.onload = () => {
// Test 1: SVG from Online
const canvas = new Canvas(650, 500)
// canvas.DrawGrid() // If you want to show grid, you can use it.
const svg2img = new SVG2IMG(canvas.canvas, "https://upload.wikimedia.org/wikipedia/commons/b/bd/Test.svg")
svg2img.AddText("Hello", 100, 250, {mode: "fill", color: "yellow", alpha: 0.8})
svg2img.AddText("world", 200, 250, {mode: "stroke", color: "red"})
svg2img.AddText("!", 280, 250, {color: "#f700ff", size: "72px"})
svg2img.Build("Test.png")
// Test 2: URI.data
const canvas2 = new Canvas(180, 180)
const uriData = "data:image/svg+xml;base64,PHN2ZyBjbGFzcz0ic3ZnLWlubGluZS0tZmEgZmEtc21pbGUtd2luayBmYS13LTE2IiBhcmlhLWhpZGRlbj0idHJ1ZSIgZm9jdXNhYmxlPSJmYWxzZSIgZGF0YS1wcmVmaXg9ImZhciIgZGF0YS1pY29uPSJzbWlsZS13aW5rIiByb2xlPSJpbWciIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgdmlld0JveD0iMCAwIDQ5NiA1MTIiIGRhdGEtZmEtaTJzdmc9IiI+PHBhdGggZmlsbD0iY3VycmVudENvbG9yIiBkPSJNMjQ4IDhDMTExIDggMCAxMTkgMCAyNTZzMTExIDI0OCAyNDggMjQ4IDI0OC0xMTEgMjQ4LTI0OFMzODUgOCAyNDggOHptMCA0NDhjLTExMC4zIDAtMjAwLTg5LjctMjAwLTIwMFMxMzcuNyA1NiAyNDggNTZzMjAwIDg5LjcgMjAwIDIwMC04OS43IDIwMC0yMDAgMjAwem0xMTcuOC0xNDYuNGMtMTAuMi04LjUtMjUuMy03LjEtMzMuOCAzLjEtMjAuOCAyNS01MS41IDM5LjQtODQgMzkuNHMtNjMuMi0xNC4zLTg0LTM5LjRjLTguNS0xMC4yLTIzLjctMTEuNS0zMy44LTMuMS0xMC4yIDguNS0xMS41IDIzLjYtMy4xIDMzLjggMzAgMzYgNzQuMSA1Ni42IDEyMC45IDU2LjZzOTAuOS0yMC42IDEyMC45LTU2LjZjOC41LTEwLjIgNy4xLTI1LjMtMy4xLTMzLjh6TTE2OCAyNDBjMTcuNyAwIDMyLTE0LjMgMzItMzJzLTE0LjMtMzItMzItMzItMzIgMTQuMy0zMiAzMiAxNC4zIDMyIDMyIDMyem0xNjAtNjBjLTI1LjcgMC01NS45IDE2LjktNTkuOSA0Mi4xLTEuNyAxMS4yIDExLjUgMTguMiAxOS44IDEwLjhsOS41LTguNWMxNC44LTEzLjIgNDYuMi0xMy4yIDYxIDBsOS41IDguNWM4LjUgNy40IDIxLjYuMyAxOS44LTEwLjgtMy44LTI1LjItMzQtNDIuMS01OS43LTQyLjF6Ij48L3BhdGg+PC9zdmc+"
const svg2img2 = new SVG2IMG(canvas2.canvas, uriData)
svg2img2.Build("SmileWink.png")
// Test 3: Exists SVG
ImportFontAwesome()
const range = document.createRange()
const fragSmile = range.createContextualFragment(`<i class="far fa-smile" style="background-color:black;color:yellow"></i>`)
document.querySelector(`body`).append(fragSmile)
// use MutationObserver to wait for font-awesome to convert ``<i class="far fa-smile"></i>`` to SVG. If you write the element in the HTML directly, you can skip this hassle.
const observer = new MutationObserver((mutationRecordList, observer) => {
for (const mutation of mutationRecordList) {
switch (mutation.type) {
case "childList":
const targetSVG = mutation.target.querySelector(`svg`)
if (targetSVG !== null) {
const canvas3 = new Canvas(64, 64) // 👈 Focus here. The part of the observer is not important.
const svg2img3 = new SVG2IMG(canvas3.canvas, SVG2IMG.Convert2URIData(targetSVG))
svg2img3.Build("Smile.png")
targetSVG.remove() // This SVG is created by font-awesome, and it's an extra element. I don't want to see it.
observer.disconnect()
return
}
}
}
})
observer.observe(document.querySelector(`body`), {childList: true})
}
})()
class SVG2IMG {
/**
* @param {HTMLCanvasElement} canvas
* @param {string} src "http://.../xxx.svg" or "data:image/svg+xml;base64,${base64}"
* */
constructor(canvas, src) {
this.canvas = canvas;
this.context = this.canvas.getContext("2d")
this.src = src
this.addTextList = []
}
/**
* @param {HTMLElement} node
* @param {string} mediaType: https://en.wikipedia.org/wiki/Media_type#Common_examples_%5B10%5D
* @see https://en.wikipedia.org/wiki/List_of_URI_schemes
* */
static Convert2URIData(node, mediaType = 'data:image/svg+xml') {
const base64 = btoa(node.outerHTML)
return `${mediaType};base64,${base64}`
}
/**
* @param {string} text
* @param {int} x
* @param {int} y
* @param {"stroke"|"fill"} mode
* @param {string} size, "30px"
* @param {string} font, example: "Arial"
* @param {string} color, example: "#3ae016" or "yellow"
* @param {int} alpha, 0.0 (fully transparent) to 1.0 (fully opaque) // https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Applying_styles_and_colors#transparency
* */
AddText(text, x, y, {mode = "fill", size = "32px", font = "Arial", color = "black", alpha = 1.0}) {
const drawFunc = (text, x, y, mode, font) => {
return () => {
// https://www.w3schools.com/graphics/canvas_text.asp
// https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/fillText
const context = this.context
const originAlpha = context.globalAlpha
context.globalAlpha = alpha
context.font = `${size} ${font}`
switch (mode) {
case "fill":
context.fillStyle = color
context.fillText(text, x, y)
break
case "stroke":
context.strokeStyle = color
context.strokeText(text, x, y)
break
default:
throw Error(`Unknown mode:${mode}`)
}
context.globalAlpha = originAlpha
}
}
this.addTextList.push(drawFunc(text, x, y, mode, font))
}
/**
* @description When the build is finished, you can click the filename to download the PNG, or mouse over the canvas to copy the PNG to the clipboard.
* */
Build(filename = "download.png") {
const img = new Image()
img.crossOrigin = "anonymous" // set before src; fixes: Tainted canvases may not be exported
img.src = this.src
img.onload = (event) => {
this.context.drawImage(event.target, 0, 0)
for (const drawTextFunc of this.addTextList) {
drawTextFunc()
}
// create a "a" node for download
const a = document.createElement('a')
document.querySelector('body').append(a)
a.innerText = filename
a.download = filename
const quality = 1.0
// a.target = "_blank"
a.href = this.canvas.toDataURL("image/png", quality)
a.append(this.canvas)
}
this.canvas.onmouseenter = (event) => {
// set background to white. Otherwise, background-color is black.
this.context.globalCompositeOperation = "destination-over" // https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/globalCompositeOperation // https://www.w3schools.com/tags/canvas_globalcompositeoperation.asp
this.context.fillStyle = "rgb(255,255,255)"
this.context.fillRect(0, 0, this.canvas.width, this.canvas.height)
this.canvas.toBlob(blob => navigator.clipboard.write([new ClipboardItem({'image/png': blob})])) // copy to clipboard
}
}
}
class Canvas {
/**
* @description for doing something like: ``<canvas width="" height=""></canvas>``
**/
constructor(w, h) {
const canvas = document.createElement("canvas")
document.querySelector(`body`).append(canvas)
this.canvas = canvas;
[this.canvas.width, this.canvas.height] = [w, h]
}
/**
* @description If your SVG is large, you may want to know which part is which.
* */
DrawGrid(step = 100) {
const ctx = this.canvas.getContext('2d')
const w = this.canvas.width
const h = this.canvas.height
// Draw the vertical line.
ctx.beginPath();
for (let x = 0; x <= w; x += step) {
ctx.moveTo(x, 0);
ctx.lineTo(x, h);
}
// set the color of the line
ctx.strokeStyle = 'rgba(255,0,0, 0.5)'
ctx.lineWidth = 1
ctx.stroke();
// Draw the horizontal line.
ctx.beginPath();
for (let y = 0; y <= h; y += step) {
ctx.moveTo(0, y)
ctx.lineTo(w, y)
}
ctx.strokeStyle = 'rgba(128, 128, 128, 0.5)'
ctx.lineWidth = 5
ctx.stroke()
}
}
function ImportFontAwesome() {
const range = document.createRange()
const frag = range.createContextualFragment(`
<link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.2/css/all.min.css" integrity="sha512-HK5fgLBL+xu6dm/Ii3z4xhlSUyZgTT9tuc/hSrtw6uzJOvgRr2a9jyxxT1ely+B+xFAmJKVSTbpM/CuL7qxO8w==" crossorigin="anonymous" />
<script src="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.2/js/all.min.js" integrity="sha512-UwcC/iaz5ziHX7V6LjSKaXgCuRRqbTp1QHpbOJ4l1nw2/boCfZ2KlFIqBUA/uRVF0onbREnY9do8rM/uT/ilqw==" crossorigin="anonymous"/>
`)
document.querySelector("head").append(frag)
}
</script>
If you run this on Stack Overflow and move your mouse over the picture, you may get the error:
DOMException: The Clipboard API has been blocked because of a permissions policy applied to the current document
You can copy the code to your local machine and run it again, and it will be fine.
Upload an image from <canvas />:
async function canvasToBlob(canvas) {
if (canvas.toBlob) {
return new Promise(function (resolve) {
canvas.toBlob(resolve)
})
} else {
throw new Error('canvas.toBlob Invalid')
}
}
await canvasToBlob(yourCanvasEl)
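To actually upload the resulting blob, a minimal sketch (the /upload endpoint and the field name are hypothetical):
const blob = await canvasToBlob(yourCanvasEl);
const formData = new FormData();
formData.append("image", blob, "canvas.png"); // field name and filename are up to you
await fetch("/upload", { method: "POST", body: formData });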
On some versions of Chrome, you can:
1. Use the draw image function: ctx.drawImage(image1, 0, 0, w, h);
2. Right-click on the canvas
You can use jsPDF to capture a canvas into an image or PDF like this:
var imgData = canvas.toDataURL('image/png');
var doc = new jsPDF('p', 'mm');
doc.addImage(imgData, 'PNG', 10, 10);
doc.save('sample-file.pdf');
More info: https://github.com/MrRio/jsPDF
The simple answer is to just grab a blob of the canvas and set an img's src to a new object URL of that blob, then add that image to a PDF using some library, like so:
var ok = document.createElement("canvas");
ok.width = 400;
ok.height = 140;
var ctx = ok.getContext("2d");

// draw some random, semi-transparent coloured squares on the canvas
for (let row = 0; row < ok.height; row++) {
  if (row % Math.floor(Math.random() * 10) !== 0) continue;
  for (let i = 0; i < ok.width; i++) {
    if (i % 25 !== 0) continue;
    ctx.globalAlpha = Math.random();
    ctx.fillStyle = "rgb(" +
      Math.random() * 255 + "," +
      Math.random() * 255 + "," +
      Math.random() * 255 + ")";
    const size = 15;
    ctx.fillRect(
      Math.sin(i * Math.PI / 180) * Math.random() * ok.width,
      Math.cos(i * Math.PI / 180) * size + row,
      size,
      size
    );
  }
}

// export the canvas as a blob and show it in the <img> below via an object URL
ok.toBlob(blob => {
  document.getElementById("k").src = URL.createObjectURL(blob);
});
<img id=k>
Alternatively, if you want to work with low-level byte data, you can get the raw bytes of the canvas and then, depending on the file spec, write the raw image data into the necessary bytes of the file. You just need to call ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height) to get the raw image data, then write it out according to the file specification.
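A short sketch of what that raw data looks like:
const ctx = document.querySelector("canvas").getContext("2d");
const imageData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
// imageData.data is a Uint8ClampedArray of RGBA bytes, 4 per pixel, row by row
const [r, g, b, a] = imageData.data.slice(0, 4); // channels of the first pixel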
If you want to embed the canvas, you can use this snippet:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<canvas id=canvas width=200 height=200></canvas>
<iframe id='img' width=200 height=200></iframe>
<script>
window.onload = function() {
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
context.fillStyle = "green";
context.fillRect(50, 50, 100, 100);
document.getElementById('img').src = canvas.toDataURL("image/jpeg");
console.log(canvas.toDataURL("image/jpeg"));
}
</script>
</body>
</html>