I want to convert an image (PNG/JPEG) to ICO using JavaScript in the frontend.
While searching the web, I came across this code on GitHub: https://gist.github.com/twolfson/7656254 but unfortunately it uses the fs module of Node.js (and the code is very difficult to comprehend).
Can someone guide me on what I should search for, or a way to convert PNG/JPEG to ICO using JavaScript in the frontend?
Alternatives I have tried:
Used this repo: https://github.com/fiahfy/ico-convert but it uses sharp, and sharp isn't supported on the client side.
While googling, I found this Mozilla post with examples, which provides the following code for converting to the ICO format (limited to Firefox only):
A way to convert a canvas to an ICO (Mozilla only)
This uses -moz-parse to convert the canvas to an ICO. Windows XP doesn't support converting from PNG to ICO, so it uses BMP instead. A download link is created by setting the download attribute; the value of the download attribute is the name it will use as the file name.
The code:
var canvas = document.getElementById('canvas');
var d = canvas.width;
var ctx = canvas.getContext('2d');
ctx.beginPath();
ctx.moveTo(d / 2, 0);
ctx.lineTo(d, d);
ctx.lineTo(0, d);
ctx.closePath();
ctx.fillStyle = 'yellow';
ctx.fill();

function blobCallback(iconName) {
  return function (b) {
    var a = document.createElement('a');
    a.textContent = 'Download';
    document.body.appendChild(a);
    a.style.display = 'block';
    a.download = iconName + '.ico';
    a.href = window.URL.createObjectURL(b);
  };
}

canvas.toBlob(
  blobCallback('passThisString'),
  'image/vnd.microsoft.icon',
  '-moz-parse-options:format=bmp;bpp=32'
);
Apart from that, I found no other way to convert PNG/JPEG into the ICO format in the browser. Alternatively, you can do the conversion on the server side by using any of the following modules (a minimal server-side sketch follows the list):
to-ico
image-to-icon
png-to-ico
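For example, a minimal server-side sketch using png-to-ico (this is Node.js, not frontend code; it assumes the module's default export accepts a PNG path or Buffer and resolves to an ICO Buffer, as its README describes):
const fs = require('fs');
const pngToIco = require('png-to-ico');

pngToIco('input.png')                                   // path (or Buffer) of the source PNG
  .then(buf => fs.writeFileSync('favicon.ico', buf))    // write the resulting ICO bytes
  .catch(err => console.error(err));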
Should you want to support every browser and have only a PNG image, the .ICO file format supports embedded PNG images as long as they are smaller than 256x256 pixels. Based on the ICO file format, I was able to construct an ICO from a small PNG image with a hex editor, and this can be replicated in JavaScript. This is my test image file:
To convert it into an ICO, I prepended the following hex data, little-endian encoded (the bytes within each multi-byte value are reversed):
00 00 01 00 - File header. Says "This file is an ICO."
01 00 - There is one image in this file.
9C - This image is 0x9C pixels wide. **This should be variable**
77 - This image is 0x77 pixels tall. **This should be variable**
00 - There is not a limited color palette.
00 - Reserved value.
01 00 - There is one color plane in this image.
18 00 - There are 0x18 bits per pixel (24 bits per pixel is standard RGB encoding)
8A 06 00 00 - This image is 0x0000068A large. **This should be variable**
16 00 00 00 - There were 0x16 bytes before this point.
[PNG data here]
This successfully created an ICO file from the PNG. You can write a simple JavaScript function to do this prepending. Looking at the PNG specification, the first 8 bytes are a signature, followed by 8 bytes of IHDR chunk metadata (length and type), after which the IHDR data begins with a 4-byte big-endian width and a 4-byte big-endian height. This can be used in our script to discover the PNG's width and height. Something like:
function pngToIco(pngData) { // pngData: raw PNG bytes as a binary string
  var icoFile = "\x00\x00\x01\x00\x01\x00"; // First 6 bytes are constant
  icoFile += pngData[15 + 4]; // PNG width, low byte
  icoFile += pngData[15 + 8]; // PNG height, low byte
  // Make sure the PNG is less than 256x256 (the high bytes must all be zero)
  if (pngData.charCodeAt(15 + 1) || pngData.charCodeAt(15 + 2) || pngData.charCodeAt(15 + 3)) {
    console.log("Width over 255!"); return;
  }
  if (pngData.charCodeAt(15 + 5) || pngData.charCodeAt(15 + 6) || pngData.charCodeAt(15 + 7)) {
    console.log("Height over 255!"); return;
  }
  // Add more (probably constant) information
  icoFile += "\x00\x00\x01\x00\x18\x00";
  // Add the PNG length as a 4-byte little-endian value
  var lenBytes = pngData.length;
  for (var i = 0; i < 4; i++) {
    icoFile += String.fromCharCode(lenBytes % 256);
    lenBytes >>= 8; // shift one byte, not one nibble
  }
  // There were 0x16 (22) bytes before this point
  icoFile += "\x16\x00\x00\x00";
  // Now add the PNG data
  icoFile += pngData;
  // Now we have a valid ico file!
  return icoFile;
}
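A hypothetical usage sketch for the function above: it expects pngData as a raw binary string, which you could obtain from an existing canvas element via toDataURL and atob, and the returned string can be repacked into a Blob for download (the canvas variable and file name are placeholders):
var dataUrl = canvas.toDataURL('image/png');
var pngData = atob(dataUrl.split(',')[1]);       // base64 part -> binary string
var icoString = pngToIco(pngData);
if (icoString) {
  // Turn the binary string back into bytes for a Blob download
  var bytes = new Uint8Array(icoString.length);
  for (var i = 0; i < icoString.length; i++) bytes[i] = icoString.charCodeAt(i);
  var link = document.createElement('a');
  link.download = 'icon.ico';
  link.href = URL.createObjectURL(new Blob([bytes], { type: 'image/x-icon' }));
  link.click();
}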
Here's another solution, if you're willing to bend the rules of this being a JavaScript question. If your web browser supports WebAssembly (most modern browsers do), you could use a version of the well-known ImageMagick library cross-compiled to WebAssembly. Here is what I found: https://github.com/KnicKnic/WASM-ImageMagick
This library takes image data in as a source buffer and returns a transformed or converted image. According to the documentation, you can use it with a syntax similar to ImageMagick's command-line syntax, with a bit of extra code (copied from the documentation and modified):
<script type='module'>
  import * as Magick from 'https://knicknic.github.io/wasm-imagemagick/magickApi.js';

  async function convertPNGToIco(pngData) {
    var icoData = await Magick.Call(
      [{ 'name': 'srcFile.png', 'content': pngData }],
      ["convert", "srcFile.png", "-resize", "200x200", "outFile.ico"]
    );
    // do stuff with icoData
  }
</script>
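And a hypothetical wiring for the pngData parameter: read a user-selected file into a Uint8Array and pass it to Magick.Call as above. The #png-input element is a placeholder, and only standard browser APIs are used; check the library's README for the exact shape of the returned files.
<script type='module'>
  import * as Magick from 'https://knicknic.github.io/wasm-imagemagick/magickApi.js';

  document.querySelector('#png-input').addEventListener('change', async (evt) => {
    const file = evt.target.files[0];
    const pngData = new Uint8Array(await file.arrayBuffer());   // raw PNG bytes
    const outputFiles = await Magick.Call(
      [{ name: 'srcFile.png', content: pngData }],
      ['convert', 'srcFile.png', '-resize', '64x64', 'outFile.ico']
    );
    console.log(outputFiles); // inspect the returned file objects here
  });
</script>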
Here is a pure JavaScript version (inspired by #id01) that takes an array of PNG images (each image as an array of bytes) and returns an array of bytes for the .ico file:
function pngToIco(images)
{
  let icoHead = [ //.ico header
        0, 0, // Reserved. Must always be 0 (2 bytes)
        1, 0, // Specifies image type: 1 for icon (.ICO) image, 2 for cursor (.CUR) image. Other values are invalid. (2 bytes)
        images.length & 255, (images.length >> 8) & 255 // Specifies number of images in the file. (2 bytes)
      ],
      icoBody = [],
      pngBody = [];

  for (let i = 0, num, pngHead, pngData, offset = 0; i < images.length; i++)
  {
    pngData = Array.from(images[i]);
    pngHead = [ //image directory (16 bytes)
      0, // Width 0-255, should be 0 if 256 pixels (1 byte)
      0, // Height 0-255, should be 0 if 256 pixels (1 byte)
      0, // Color count, should be 0 if more than 256 colors (1 byte)
      0, // Reserved, should be 0 (1 byte)
      1, 0, // Color planes when in .ICO format, should be 0 or 1, or the X hotspot when in .CUR format (2 bytes)
      32, 0 // Bits per pixel when in .ICO format, or the Y hotspot when in .CUR format (2 bytes)
    ];
    num = pngData.length;
    for (let i = 0; i < 4; i++)
      pngHead[pngHead.length] = (num >> (8 * i)) & 255; // Size of the bitmap data in bytes (4 bytes)

    num = icoHead.length + ((pngHead.length + 4) * images.length) + offset;
    for (let i = 0; i < 4; i++)
      pngHead[pngHead.length] = (num >> (8 * i)) & 255; // Offset in the file (4 bytes)

    offset += pngData.length;
    icoBody = icoBody.concat(pngHead); // combine image directory
    pngBody = pngBody.concat(pngData); // combine actual image data
  }
  return icoHead.concat(icoBody, pngBody);
}
Example of how to use it, converting multiple canvas images into a .ico:
function draw(canvas)
{
  var ctx = canvas.getContext('2d');
  ctx.beginPath();
  var p = Math.min(canvas.width, canvas.height);
  ctx.fillStyle = "yellow";
  ctx.lineWidth = p / 16 / 1.5;
  ctx.lineCap = "round";
  ctx.arc(50*p/100, 50*p/100, 39*p/100, 0, Math.PI * 2, true); // Outer circle
  ctx.save();
  var radgrad = ctx.createRadialGradient(50*p/100, 50*p/100, 0, 50*p/100, 50*p/100, 50*p/100);
  radgrad.addColorStop(0, 'rgba(128,128,128,1)');
  radgrad.addColorStop(0.3, 'rgba(128,128,128,.8)');
  radgrad.addColorStop(0.5, 'rgba(128,128,128,.5)');
  radgrad.addColorStop(0.8, 'rgba(128,128,128,.2)');
  radgrad.addColorStop(1, 'rgba(128,128,128,0)');
  ctx.fillStyle = radgrad;
  ctx.fillRect(0, 0, p, p);
  ctx.restore();
  ctx.fill();
  ctx.moveTo(77*p/100, 50*p/100);
  ctx.arc(50*p/100, 50*p/100, 27*p/100, 0, Math.PI, false); // Mouth (clockwise)
  ctx.moveTo(41*p/100, 42*p/100);
  ctx.arc(38*p/100, 42*p/100, 3*p/100, 0, Math.PI * 2, true); // Left eye
  ctx.moveTo(65*p/100, 42*p/100);
  ctx.arc(62*p/100, 42*p/100, 3*p/100, 0, Math.PI * 2, true); // Right eye
  ctx.stroke();
}

let images = []; // array of image data, each image as an array of bytes
for (let i = 0, a = [16, 24, 32, 48, 64, 128, 256, 512, 1024]; i < a.length; i++)
{
  let canvas = document.createElement("canvas");
  canvas.width = a[i];
  canvas.height = a[i];
  draw(canvas);
  // Convert canvas to Blob, then Blob to ArrayBuffer.
  canvas.toBlob(function (blob) {
    const reader = new FileReader();
    reader.addEventListener('loadend', () => {
      images[i] = new Uint8Array(reader.result);
      if (images.length == a.length) // all canvases converted to png
      {
        let icoData = pngToIco(images), // array of bytes
            type = "image/x-ico",
            blob = new Blob([new Uint8Array(icoData)], {type: type}),
            a = document.getElementById("download");
        a.download = "smile.ico";
        a.href = window.URL.createObjectURL(blob);
        a.dataset.downloadurl = [type, a.download, a.href].join(':');
        document.getElementById("img").src = a.href;
      }
    });
    reader.readAsArrayBuffer(blob);
  }, "image/png");
  document.body.appendChild(canvas);
}
<a id="download">download image <img id="img" width="128"></a>
<br>
https://jsfiddle.net/vanowm/b657yksg/
I wrote an ES6 module that packs PNG files into an ICO container: PNG2ICOjs. Everything works in modern browsers without any dependency.
import { PngIcoConverter } from "../src/png2icojs.js";

// ...
const converter = new PngIcoConverter();
const inputs = [...files].map(file => ({   // files: e.g. from an <input type="file"> element
  png: file
}));

// Result is a Blob
const resultBlob1 = await converter.convertToBlobAsync(inputs); // Default mime type is image/x-icon
const resultBlob2 = await converter.convertToBlobAsync(inputs, "image/your-own-mime");

// Result is a Uint8Array
const resultArr = await converter.convertAsync(inputs);
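For completeness, a small follow-up sketch (standard DOM APIs only; the file name is arbitrary) that offers the resulting Blob as a download:
const link = document.createElement('a');
link.download = 'favicon.ico';                  // arbitrary file name
link.href = URL.createObjectURL(resultBlob1);
link.click();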
This should work.
GitHub URL: https://github.com/egy186/icojs
<input type="file" id="input-file" />
<script>
  document.getElementById('input-file').addEventListener('change', function (evt) {
    // use FileReader for converting File object to ArrayBuffer object
    var reader = new FileReader();
    reader.onload = function (e) {
      ICO.parse(e.target.result).then(function (images) {
        // logs images
        console.dir(images);
      });
    };
    reader.readAsArrayBuffer(evt.target.files[0]);
  }, false);
</script>
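If you go this route, a possible next step is to display one of the extracted images. This is only a sketch: it assumes each parsed entry exposes width, height and a buffer of encoded image bytes, as the icojs README describes, and it would replace the console.dir call inside the then-callback above:
// inside the .then(function (images) { ... }) callback above
var blob = new Blob([images[0].buffer], { type: 'image/png' }); // buffer: encoded image bytes per icojs docs
var imgEl = document.createElement('img');
imgEl.src = URL.createObjectURL(blob);
document.body.appendChild(imgEl);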
Fully working example.
Here is the JSFiddle: click (use it, because this site's code snippet doesn't allow me to click the a[download] link), but the rest of the code works in the code snippet - you can open the links in a new tab to see it.
var MyBlobBuilder = function () {
  this.parts = [];
};

MyBlobBuilder.prototype.append = function (part) {
  this.parts.push(part);
  this.blob = undefined; // Invalidate the blob
};

MyBlobBuilder.prototype.write = function (part) {
  this.append(part);
};

MyBlobBuilder.prototype.getBlob = function (atype) {
  if (!this.blob) {
    this.blob = new Blob(this.parts, {
      type: !atype ? "text/plain" : atype
    });
  }
  return this.blob;
};
const img = document.getElementById('input'),
a = document.getElementById('a'),
a2 = document.getElementById('a2'),
file1 = document.getElementById('file');
let imgSize = 0,
imgBlob;
img.onload = e => {
fetch(img.src).then(resp => resp.blob())
.then(blob => {
imgBlob = blob;
imgSize = blob.size;
});
};
function convertToIco(imgSize, imgBlob) {
let file = new MyBlobBuilder(),
buff;
// Write out the .ico header [00, 00]
// Reserved space
buff = new Uint8Array([0, 0]).buffer;
file.write(buff, 'binary');
// Indicate ico file [01, 00]
buff = new Uint8Array([1, 0]).buffer;
file.write(buff, 'binary');
// Indicate 1 image [01, 00]
buff = new Uint8Array([1, 0]).buffer;
file.write(buff, 'binary');
// Image is 50 px wide [32]
buff = new Uint8Array([img.width < 256 ? img.width : 0]).buffer;
file.write(buff, 'binary');
// Image is 50 px tall [32]
buff = new Uint8Array([img.height < 256 ? img.height : 0]).buffer;
file.write(buff, 'binary');
// Specify no color palette [00]
// TODO: Not sure if this is appropriate
buff = new Uint8Array([0]).buffer;
file.write(buff, 'binary');
// Reserved space [00]
// TODO: Not sure if this is appropriate
buff = new Uint8Array([0]).buffer;
file.write(buff, 'binary');
// Specify 1 color plane [01, 00]
// TODO: Not sure if this is appropriate
buff = new Uint8Array([1, 0]).buffer;
file.write(buff, 'binary');
// Specify 32 bits per pixel (bit depth) [20, 00]
// TODO: Quite confident in this one
buff = new Uint8Array([32, 0]).buffer;
file.write(buff, 'binary');
// Specify image size in bytes
// DEV: Assuming LE means little endian [84, 01, 00, 00] = 388 byte
// TODO: Semi-confident in this one
buff = new Uint32Array([imgSize]).buffer;
file.write(buff, 'binary');
// Specify image offset in bytes
// TODO: Not that confident in this one [16]
buff = new Uint32Array([22]).buffer;
file.write(buff, 'binary');
// Dump the .png
file.write(imgBlob, 'binary');
return file.getBlob('image/vnd.microsoft.icon');
}
function test() {
const ico = convertToIco(imgSize, imgBlob);
let url = window.URL.createObjectURL(ico);
a.href = url;
document.getElementById('result1').style.display = '';
}
file1.addEventListener('change', () => {
var file = file1.files[0];
var fileReader = new FileReader();
fileReader.onloadend = function(e) {
var arrayBuffer = e.target.result;
const blobFile = new Blob([arrayBuffer]);
const ico = convertToIco(blobFile.size, blobFile);
let url = window.URL.createObjectURL(ico);
a2.href = url;
document.getElementById('result2').style.display = '';
}
fileReader.readAsArrayBuffer(file);
});
.section {
padding: 4px;
border: 1px solid gray;
margin: 4px;
}
h3 {
margin: 0 0 4px 0;
}
<div class="section">
<h3>Convert img[src] to ICO</h3>
<div>
Example (PNG): <img src="https://gist.githubusercontent.com/twolfson/7656254/raw/c5d0dcedc0e212f177695f08d685af5aad9ff865/sprite1.png" id="input" />
</div>
<div>
<button onclick="test()">Convert to ICO</button>
</div>
<div id="result1" style="display:none">
Save ICO: <a id="a" href="#" download="example.ico">click me</a>
</div>
</div>
<div class="section">
<h3>Convert input[type=file] to ICO</h3>
<input type="file" id="file" />
<div id="result2" style="display:none">
Save ICO: <a id="a2" href="#" download="example.ico">click me</a>
</div>
</div>
P.S. Used documentation: Wikipedia.
Related
I am getting an error while using the npm pica library. I want to resize an image using this library. I have tried two ways:
I. passing the image URL directly to pica.resize(imageurl, canvas), and
II. passing an image buffer to pica.resize(imageBuffer, canvas). But it shows the error "Pica: cannot use getImageData on canvas, make sure fingerprinting protection isn't enabled." when I run the code.
// Convert Base64 to BufferArray
const buf = Buffer.from(img, 'base64');
let binary = '';
const chunk = 8 * 1024;
let i;
for (i = 0; i < buf.length / chunk; i += 1) {
binary += String.fromCharCode.apply(null, [
...buf.slice(i * chunk, (i + 1) * chunk),
]);
}
binary += String.fromCharCode.apply(null, [...buf.slice(i * chunk)]);
// Dimensions for the image
const width = 1200;
const height = 627;
// Instantiate the canvas object
const can = canvas.createCanvas(width,height);
// const context = canvas.getContext("2d");
console.log(binary)
pica.resize(binary, can)
.then(result => console.log(result));
🖐
I'm using WaveSurfer.js for my project, where users can edit audio.
For that I use the regions plugin.
When the user clicks the Finish button, I want to export the result as an audio file (mp3/wav).
To get the peaks of the audio where the user made their selection, I do:
var json = wavesurfer.backend.getPeaks(960, wavesurfer.regions.list["wavesurfer_j99v7ophop8"].start, wavesurfer.regions.list["wavesurfer_j99v7ophop8"].end)
This works, but I want to export it as an audio file, not JSON.
Thanks in advance
See this answer; you can create a WAV audio file from the buffer.
So the code and method might look like this:
Method:
// Convert an AudioBuffer segment to a Blob using the WAVE representation.
// The returned Object URL can be set directly as the source of an Audio element.
// (C) Ken Fyrstenberg / MIT license
function bufferToWave(abuffer, offset, len) {
  var numOfChan = abuffer.numberOfChannels,
      length = len * numOfChan * 2 + 44,
      buffer = new ArrayBuffer(length),
      view = new DataView(buffer),
      channels = [], i, sample,
      pos = 0;

  // write WAVE header
  setUint32(0x46464952);                         // "RIFF"
  setUint32(length - 8);                         // file length - 8
  setUint32(0x45564157);                         // "WAVE"

  setUint32(0x20746d66);                         // "fmt " chunk
  setUint32(16);                                 // length = 16
  setUint16(1);                                  // PCM (uncompressed)
  setUint16(numOfChan);
  setUint32(abuffer.sampleRate);
  setUint32(abuffer.sampleRate * 2 * numOfChan); // avg. bytes/sec
  setUint16(numOfChan * 2);                      // block-align
  setUint16(16);                                 // 16-bit (hardcoded in this demo)

  setUint32(0x61746164);                         // "data" - chunk
  setUint32(length - pos - 4);                   // chunk length

  // write interleaved data
  for (i = 0; i < abuffer.numberOfChannels; i++)
    channels.push(abuffer.getChannelData(i));

  while (pos < length) {
    for (i = 0; i < numOfChan; i++) {            // interleave channels
      sample = Math.max(-1, Math.min(1, channels[i][offset]));           // clamp
      sample = (0.5 + sample < 0 ? sample * 32768 : sample * 32767) | 0; // scale to 16-bit signed int
      view.setInt16(pos, sample, true);          // update data chunk
      pos += 2;
    }
    offset++;                                    // next source sample
  }

  // create Blob
  return (URL || webkitURL).createObjectURL(new Blob([buffer], {type: "audio/wav"}));

  function setUint16(data) {
    view.setUint16(pos, data, true);
    pos += 2;
  }

  function setUint32(data) {
    view.setUint32(pos, data, true);
    pos += 4;
  }
}
Usage:
let waveUrl = bufferToWave(app.engine.wavesurfer.backend.buffer, 0, app.engine.wavesurfer.backend.buffer.length); // object URL for the exported WAV
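Since bufferToWave returns an object URL, the result above can be previewed or saved directly; a minimal sketch (the file name is arbitrary):
let audio = new Audio(waveUrl);          // preview the exported audio
audio.play();

let link = document.createElement('a');  // or offer it as a .wav download
link.download = 'export.wav';
link.href = waveUrl;
link.click();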
HTMLCanvasElement has toDataURL(), but OffscreenCanvas does not. What a surprise.
OK, so how can I get toDataURL() to work with Workers? I have a finished canvas (fully drawn) and can send it to a Worker. But what can I do from there?
The only solution I have is to manually do all the operations to create an image/png. So I found this page from 2010. (I am not sure if that is what I need, though.) It provides the code below, which generates a PNG and encodes it to base64.
And my final question:
1 - Is there some reasonable way to get toDataURL() from a Worker, OR
2 - Is there any library or something designed for this job, OR
3 - Using all the functionality of HTMLCanvasElement and OffscreenCanvas, how should the following code be adapted to replace toDataURL()?
Here are the two functions from the code I'm linking to. (They are really complicated for me, and I understand almost nothing from getDump().)
// output a PNG string
this.getDump = function() {
// compute adler32 of output pixels + row filter bytes
var BASE = 65521; /* largest prime smaller than 65536 */
var NMAX = 5552; /* NMAX is the largest n such that 255n(n+1)/2 + (n+1)(BASE-1) <= 2^32-1 */
var s1 = 1;
var s2 = 0;
var n = NMAX;
for (var y = 0; y < this.height; y++) {
for (var x = -1; x < this.width; x++) {
s1+= this.buffer[this.index(x, y)].charCodeAt(0);
s2+= s1;
if ((n-= 1) == 0) {
s1%= BASE;
s2%= BASE;
n = NMAX;
}
}
}
s1%= BASE;
s2%= BASE;
write(this.buffer, this.idat_offs + this.idat_size - 8, byte4((s2 << 16) | s1));
// compute crc32 of the PNG chunks
function crc32(png, offs, size) {
var crc = -1;
for (var i = 4; i < size-4; i += 1) {
crc = _crc32[(crc ^ png[offs+i].charCodeAt(0)) & 0xff] ^ ((crc >> 8) & 0x00ffffff);
}
write(png, offs+size-4, byte4(crc ^ -1));
}
crc32(this.buffer, this.ihdr_offs, this.ihdr_size);
crc32(this.buffer, this.plte_offs, this.plte_size);
crc32(this.buffer, this.trns_offs, this.trns_size);
crc32(this.buffer, this.idat_offs, this.idat_size);
crc32(this.buffer, this.iend_offs, this.iend_size);
// convert PNG to string
return "\211PNG\r\n\032\n"+this.buffer.join('');
}
Here it is quite clear what is going on:
// output a PNG string, Base64 encoded
this.getBase64 = function() {
var s = this.getDump();
// If the current browser supports the Base64 encoding
// function, then offload the that to the browser as it
// will be done in native code.
if ((typeof window.btoa !== 'undefined') && (window.btoa !== null)) {
return window.btoa(s);
}
var ch = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";
var c1, c2, c3, e1, e2, e3, e4;
var l = s.length;
var i = 0;
var r = "";
do {
c1 = s.charCodeAt(i);
e1 = c1 >> 2;
c2 = s.charCodeAt(i+1);
e2 = ((c1 & 3) << 4) | (c2 >> 4);
c3 = s.charCodeAt(i+2);
if (l < i+2) { e3 = 64; } else { e3 = ((c2 & 0xf) << 2) | (c3 >> 6); }
if (l < i+3) { e4 = 64; } else { e4 = c3 & 0x3f; }
r+= ch.charAt(e1) + ch.charAt(e2) + ch.charAt(e3) + ch.charAt(e4);
} while ((i+= 3) < l);
return r;
}
Thanks
First, I'll note that you probably don't want a data URL of your image file. Data URLs are a far less performant way to deal with files than their binary equivalent, Blob, and almost everything you can do with a data URL can, and generally should, be done with a Blob and a blob: URL instead.
That said, you can still generate a data URL from an OffscreenCanvas.
This is a two step process:
call its convertToBlob method
read the generated Blob as a dataURL using a FileReader[Sync]
const worker = new Worker(getWorkerURL());
worker.onmessage = e => console.log(e.data);
function getWorkerURL() {
return URL.createObjectURL(
new Blob([worker_script.textContent])
);
}
<script id="worker_script" type="ws">
const canvas = new OffscreenCanvas(150,150);
const ctx = canvas.getContext('webgl');
canvas[
canvas.convertToBlob
? 'convertToBlob' // specs
: 'toBlob' // current Firefox
]()
.then(blob => {
const dataURL = new FileReaderSync().readAsDataURL(blob);
postMessage(dataURL);
});
</script>
Since what you actually want is to render what this OffscreenCanvas produced, you'd be better off generating your OffscreenCanvas by transferring the control of a visible one.
This way you can send the ImageBitmap directly to the UI without any memory overhead.
const offcanvas = document.getElementById('canvas')
.transferControlToOffscreen();
const worker = new Worker(getWorkerURL());
worker.postMessage({canvas: offcanvas}, [offcanvas]);
function getWorkerURL() {
return URL.createObjectURL(
new Blob([worker_script.textContent])
);
}
<canvas id="canvas"></canvas>
<script id="worker_script" type="ws">
onmessage = e => {
const canvas = e.data.canvas;
const gl = canvas.getContext('webgl');
gl.viewport(0, 0,
gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.enable(gl.SCISSOR_TEST);
// make some slow noise (we're in a Worker)
for(let y=0; y<gl.drawingBufferHeight; y++) {
for(let x=0; x<gl.drawingBufferWidth; x++) {
gl.scissor(x, y, 1, 1);
gl.clearColor(Math.random(), Math.random(), Math.random(), 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
}
}
// draw to visible <canvas> in FF
if(gl.commit) gl.commit();
};
</script>
If you really, absolutely need an <img>, then create a blob: URI from the generated Blob. But note that by doing so you keep the image in memory once more (which is still far better than the three copies a data URL induces; still, don't do this with animated content).
const worker = new Worker(getWorkerURL());
worker.onmessage = e => {
document.getElementById('img').src = e.data;
}
function getWorkerURL() {
return URL.createObjectURL(
new Blob([worker_script.textContent])
);
}
img.onerror = e => {
document.body.textContent = '';
const a = document.createElement('a');
a.href = "https://jsfiddle.net/5yhg2c9L/";
a.textContent = "Your browser doesn't like StackSnippet's null origined iframe, please try again from this jsfiddle";
document.body.append(a);
};
<img id="img">
<script id="worker_script" type="ws">
const canvas = new OffscreenCanvas(150,150);
const gl = canvas.getContext('webgl');
gl.viewport(0, 0,
gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.enable(gl.SCISSOR_TEST);
// make some slow noise (we're in a Worker)
for(let y=0; y<gl.drawingBufferHeight; y++) {
for(let x=0; x<gl.drawingBufferWidth; x++) {
gl.scissor(x, y, 1, 1);
gl.clearColor(Math.random(), Math.random(), Math.random(), 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
}
}
canvas[
canvas.convertToBlob
? 'convertToBlob' // specs
: 'toBlob' // current Firefox
]()
.then(blob => {
const blobURL = URL.createObjectURL(blob);
postMessage(blobURL);
});
</script>
(Note that you could also transfer an ImageBitmap from the Worker to the main thread and then draw it on a visible canvas, but in this case, using a transferred context is even better.)
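For completeness, a minimal sketch of that ImageBitmap route (the element id, the 2D drawing, and the worker wiring are placeholders; transferToImageBitmap and the bitmaprenderer context are the relevant APIs):
// inside the Worker: draw on an OffscreenCanvas, then hand over an ImageBitmap
const canvas = new OffscreenCanvas(150, 150);
const ctx = canvas.getContext('2d');
ctx.fillStyle = 'rebeccapurple';
ctx.fillRect(0, 0, 150, 150);
const bitmap = canvas.transferToImageBitmap();
postMessage(bitmap, [bitmap]); // transferred, not copied

// on the main thread: paint the transferred bitmap onto a visible canvas
// (assumes a <canvas id="out" width="150" height="150"> exists)
const visible = document.getElementById('out');
worker.onmessage = e => {
  visible.getContext('bitmaprenderer').transferFromImageBitmap(e.data);
};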
Blob is great, but toBlob and convertToBlob are actually much, much slower than toDataURL on Chrome; I really don't understand why.
On Chrome, for a 1920x1080 canvas, toDataURL took me 20 ms, toBlob took 300-500 ms, and convertToBlob took 800 ms.
So if performance is an issue, I would rather use toDataURL on the main thread.
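If performance matters for your case, it's easy to check this claim on your own machine; numbers vary a lot between browsers and canvas contents. A rough sketch using performance.now() (toBlob is asynchronous, so its timing includes callback scheduling):
const canvas = document.createElement('canvas');
canvas.width = 1920;
canvas.height = 1080;
canvas.getContext('2d').fillRect(0, 0, 1920, 1080);

let t0 = performance.now();
canvas.toDataURL('image/png');
console.log('toDataURL:', (performance.now() - t0).toFixed(1), 'ms');

t0 = performance.now();
canvas.toBlob(() => {
  console.log('toBlob:', (performance.now() - t0).toFixed(1), 'ms');
}, 'image/png');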
I am very new to Swift programming, so please pardon me.
What I am trying to do is voice recording, and below is the method that gets the stream from the iPhone's microphone. I have succeeded in storing the stream to a WAV file on the device using this method. I confirmed that the file is OK and plays fine after the recording:
osErr = ExtAudioFileWrite(destinationFile!, frameCount, inputDataList)
What I have been unsuccessful in doing is capturing the audio stream, passing it to the WKWebView as base64-encoded data, converting the data into a Blob, and then passing the Blob as a source to an HTML5 Audio tag using
URL.createObjectURL(new Blob( [blob], { type:{"audio/wav"} })
The Audio element just throws an error and does not play the blob. I don't know if what I am getting at the sendAudioBuffer part of my code is the correct data to pass.
I am really thankful for any bit of information. The original code where I got the method from is here:
https://gist.github.com/hotpaw2/ba815fc23b5d642705f2b1dedfaf0107
func processMicrophoneBuffer( inputDataList: UnsafeMutablePointer<AudioBufferList>, frameCount : UInt32 ) {
let inputDataPtr = UnsafeMutableAudioBufferListPointer(inputDataList)
let mBuffers : AudioBuffer = inputDataPtr[0]
let count = Int(frameCount)
let bufferPointer = UnsafeMutableRawPointer(mBuffers.mData)
if let bptr = bufferPointer {
let dataArray = bptr.assumingMemoryBound(to: Float.self)
var sum : Float = 0.0
var j = self.circInIdx
let m = self.circBuffSize
for i in 0..<(count/2) {
let x = Float(dataArray[i+i ]) // copy left channel sample
let y = Float(dataArray[i+i+1]) // copy right channel sample
self.circBuffer[j ] = x
self.circBuffer[j + 1] = y
j += 2 ; if j >= m { j = 0 } // into circular buffer
sum += x * x + y * y
}
self.circInIdx = j // circular index will always be less than size
if sum > 0.0 && count > 0 {
let tmp = 5.0 * (logf(sum / Float(count)) + 20.0)
let r : Float = 0.2
audioLevel = r * tmp + (1.0 - r) * audioLevel
}
}
//This will write to a file
osErr = ExtAudioFileWrite(destinationFile!, frameCount, inputDataList)
//This will get the audio stream and send to WKWebview to be converted to Blob
sendAudioBuffer(buffer: Data(bytes: bufferPointer!, count: count))
}
... converting the
data into a Blob then pass the Blob as a source in HTML5 Audio tag
using
URL.createObjectURL(new Blob( [blob], { type:{"audio/wav"} })
The syntax in the line quoted above is wrong. The correct one is:
URL.createObjectURL(new Blob([blob], {type:"audio/wav"}))
Also, if your blob is really a base64 string, you can get rid of createObjectURL and set the src attribute of your <source> in the HTML5 Audio tag as a data URI, e.g.:
document.querySelector('#my_audio source').src = 'data:audio/wav;base64,' + blob;
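Alternatively, if you want to keep the createObjectURL approach, decode the base64 string into bytes first so the Blob contains binary data rather than text. A sketch (base64Audio stands in for the string received from Swift):
const byteString = atob(base64Audio);            // base64 -> binary string
const bytes = new Uint8Array(byteString.length);
for (let i = 0; i < byteString.length; i++) {
  bytes[i] = byteString.charCodeAt(i);
}
const blob = new Blob([bytes], { type: 'audio/wav' });
document.querySelector('#my_audio source').src = URL.createObjectURL(blob);
document.querySelector('#my_audio').load();      // tell the audio element to re-read its sources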
Hope this helps.
I have a binary Node.js Buffer object that contains bitmap information. How do I make an image from the buffer and save it to a file?
Edit:
I tried using the file system package as #herchu said, but if I do this:
let robot = require("robotjs")
let fs = require('fs')
let size = 200
let img = robot.screen.capture(0, 0, size, size)
let path = 'myfile.png'
let buffer = img.image
fs.open(path, 'w', function (err, fd) {
if (err) {
// Something wrong creating the file
}
fs.write(fd, buffer, 0, buffer.length, null, function (err) {
// Something wrong writing contents!
})
})
I get
Although the solutions by #herchu and #Jake work, they are extremely slow (10-15 s in my experience).
Jimp supports converting a raw pixel buffer into a PNG out of the box and works a lot faster (sub-second).
const img = robot.screen.capture(0, 0, width, height).image;
new Jimp({data: img, width, height}, (err, image) => {
image.write(fileName);
});
Note: I am editing my answer according to your latest edits.
If you are using robotjs, note that its Bitmap object contains a Buffer of raw pixel data -- not PNG or any other file-format contents, just pixels next to each other (exactly 200 x 200 elements in your case).
I have not found any function to write the contents in another format in the robotjs library (not that I know it well either), so in this answer I am using a different library, Jimp, for the image manipulation.
let robot = require("robotjs")
let fs = require('fs')
let Jimp = require('jimp')
let size = 200
let rimg = robot.screen.capture(0, 0, size, size)
let path = 'myfile.png'
// Create a new blank image, same size as Robotjs' one
let jimg = new Jimp(size, size);
for (var x=0; x<size; x++) {
for (var y=0; y<size; y++) {
// hex is a string, rrggbb format
var hex = rimg.colorAt(x, y);
// Jimp expects an Int, with RGBA data,
// so add FF as 'full opaque' to RGB color
var num = parseInt(hex+"ff", 16)
// Set pixel manually
jimg.setPixelColor(num, x, y);
}
}
jimg.write(path)
Note that the conversion is done by manually iterating over all pixels; this is slow in JS. There are also some details in how each library handles its pixel format, so some manipulation was needed in the loop -- it should be clear from the embedded comments.
Adding this as an addendum to the accepted answer from #herchu: this code sample processes/converts the raw bytes much more quickly (< 1 s for a full screen for me). Hope this is helpful to someone.
var jimg = new Jimp(width, height);
for (var x = 0; x < width; x++) {
  for (var y = 0; y < height; y++) {
    var index = (y * rimg.byteWidth) + (x * rimg.bytesPerPixel);
    // robotjs buffers are BGRA, so these variable names are effectively swapped:
    // 'r' actually holds the blue byte and 'b' holds the red byte
    var r = rimg.image[index];
    var g = rimg.image[index+1];
    var b = rimg.image[index+2];
    // (b<<24 | g<<16 | r<<8 | 255) therefore yields the 0xRRGGBBAA value Jimp expects
    var num = (r*256) + (g*256*256) + (b*256*256*256) + 255;
    jimg.setPixelColor(num, x, y);
  }
}
Four times faster!
About 280 ms and 550 KB for a full 1920x1080 screen, if you use this script.
I found this pattern by comparing the two byte streams byte by byte, head-on.
const robotjs = require('robotjs');
const Jimp = require('jimp');
const app = require('express').Router();
app.get('/screenCapture', (req, res)=>{
let image = robotjs.screen.capture();
for(let i=0; i < image.image.length; i++){
if(i%4 == 0){
[image.image[i], image.image[i+2]] = [image.image[i+2], image.image[i]];
}
}
var jimg = new Jimp(image.width, image.height);
jimg.bitmap.data = image.image;
jimg.getBuffer(Jimp.MIME_PNG, (err, result)=>{
res.set('Content-Type', Jimp.MIME_PNG);
res.send(result);
});
});
If you add this code before jimg.getBuffer, you'll get about 210 ms and 320 KB for a full screen:
jimg.rgba(true);
jimg.filterType(1);
jimg.deflateLevel(5);
jimg.deflateStrategy(1);
I suggest you take a look at sharp, as it has superior performance metrics compared to jimp.
The issue with robotjs screen capturing (which actually happens to be very efficient) is that it uses the BGRA color model, not RGBA. So you need to do an additional color rotation.
Also, since we take a screenshot of the desktop, I can't imagine a case where we would need transparency, so I suggest ignoring it.
const [left, top, width, height] = [0, 0, 100, 100]
const channels = 3
const {image, width: cWidth, height: cHeight, bytesPerPixel, byteWidth} = robot.screen.capture(left, top, width, height)
const uint8array = new Uint8Array(cWidth*cHeight*channels);
for(let h=0; h<cHeight; h+=1) {
for(let w=0; w<cWidth; w+=1) {
let offset = (h*cWidth + w)*channels
let offset2 = byteWidth*h + w*bytesPerPixel
uint8array[offset] = image.readUInt8(offset2 + 2)
uint8array[offset + 1] = image.readUInt8(offset2 + 1)
uint8array[offset + 2] = image.readUInt8(offset2 + 0)
}
}
await sharp(Buffer.from(uint8array), {
raw: {
width: cWidth,
height: cHeight,
channels,
}
}).toFile('capture.png')
I use an intermediate array here, but you could actually just swap the channels in place in the result of the screen capture, as sketched below.
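For reference, a sketch of that in-place variant. It assumes the capture rows have no padding (byteWidth === width * bytesPerPixel); the B and R bytes are swapped directly in the robotjs buffer and the alpha channel is dropped with sharp's removeAlpha():
const sharp = require('sharp');
const robot = require('robotjs');

const capture = robot.screen.capture(0, 0, 100, 100);
const { image, width, height, byteWidth, bytesPerPixel } = capture;

// Bail out if the rows are padded; then the buffer length wouldn't match width*height*4.
if (byteWidth !== width * bytesPerPixel) throw new Error('row padding present');

// Swap B and R in place: robotjs gives BGRA, sharp's raw input expects RGBA.
for (let i = 0; i < image.length; i += 4) {
  const b = image[i];
  image[i] = image[i + 2];
  image[i + 2] = b;
}

sharp(image, { raw: { width, height, channels: 4 } })
  .removeAlpha()                 // drop the (meaningless) alpha channel
  .toFile('capture.png')
  .then(() => console.log('saved'));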