Convert ICO icon file to PNG image file using plain JavaScript

I want to..
.. convert an ICO file (e.g. http://www.google.com/favicon.ico) to a PNG file after I have downloaded it.
.. preserve transparency.
.. apply the solution in a node.js application.
I don't want to and already tried to ..
.. use native tools such as ImageMagick (that's what I currently use in my application, but it's really bad for maintaining platform independence).
.. use tools that internally use native tools (e.g. gm.js).
.. rely on web services such as http://www.google.com/s2/favicons?domain=www.google.de that don't allow configuring the resulting size, or that require payment or a login.
Therefore I'd love a JavaScript-only solution. I used Jimp in another application, but it does not support ICO files.
Any help is appreciated. Thanks!

Use a FileReader(). Read the file as a data URL, strip the Base64 prefix, and write the remaining Base64 data back out. Done.
const inputFile = __dirname + "/favicon.ico";
const outputFile = __dirname + "/favicon.png";

(function (inputFile, outputFile) {
    const fileApi = require("file-api");
    const fs = require("fs");
    const File = fileApi.File;
    var fileReader = new fileApi.FileReader();

    fileReader.readAsDataURL(new File(inputFile));
    fileReader.addEventListener("load", function (ev) {
        var rawdata = ev.target.result;
        // Strip the "data:...;base64," prefix, keeping only the Base64 payload.
        rawdata = rawdata.replace(/.*base64,/, "");
        fs.writeFileSync(outputFile, rawdata, "base64");
    });
})(inputFile, outputFile);

I am not familiar with the Node environment, but I wrote this ES6 module, PNG2ICOjs, using purely JavaScript ArrayBuffer and Blob APIs, and it can run 100% on client-side browsers (I assume a Node file should act like a Blob).
import { PngIcoConverter } from "../src/png2icojs.js";

// ...
const converter = new PngIcoConverter();
const inputs = [...files].map(file => ({
    png: file
}));

// Result is a Blob
const resultBlob1 = await converter.convertToBlobAsync(inputs); // Default mime type is image/x-icon
const resultBlob2 = await converter.convertToBlobAsync(inputs, "image/your-own-mime");

// Result is a Uint8Array
const resultArr = await converter.convertAsync(inputs);
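For the ICO-to-PNG direction the question actually asks about, here is a minimal pure-JavaScript sketch. It assumes the decode-ico and pngjs npm packages (both plain JavaScript, no native binaries) and their documented APIs; treat it as a starting point, not a drop-in implementation:
const fs = require("fs");
const decodeIco = require("decode-ico"); // npm install decode-ico pngjs
const { PNG } = require("pngjs");

function icoToPng(inputFile, outputFile) {
    // decode-ico returns one entry per frame embedded in the ICO.
    const images = decodeIco(fs.readFileSync(inputFile));
    // Pick the largest frame; ICO files usually bundle several sizes.
    const best = images.reduce((a, b) => (b.width > a.width ? b : a));
    if (best.type === "png") {
        // Modern ICOs can embed PNG frames directly; just write the bytes out.
        fs.writeFileSync(outputFile, best.data);
    } else {
        // BMP frames come back as raw RGBA pixels, so transparency is preserved.
        const png = new PNG({ width: best.width, height: best.height });
        png.data = Buffer.from(best.data);
        fs.writeFileSync(outputFile, PNG.sync.write(png));
    }
}

icoToPng(__dirname + "/favicon.ico", __dirname + "/favicon.png");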

Related

Attempting to download a .xlsx file and upload it to Azure Blob Storage using Playwright with JavaScript produces a malformed .xlsx file with a bad header

I am trying to download an Excel file and then upload it to Azure Blob Storage for use in Azure Data Factory. I have a Playwright JavaScript script that worked when the file was a .csv, but when I try the same code with an Excel file, it will not open in Excel. It says:
"We found a problem with some content in 'order_copy.xlsx'. Do you want us to try to recover as much as we can?"
After clicking yes, it says:
"Excel cannot open the file 'order_copy.xlsx' because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file."
Any ideas on how to use the createReadStream more effectively to do this and preserve the .xlsx format?
I don't think the saveAs method will work since this code is being executed in an Azure Function with no access to a local known path.
My first thought was the content type was not right, so I set that, but it still did not work. I tried a UTF-8 encoder but that also did not work.
//const data = await streamToText(download_csv.createReadStream())
const download_reader = await download_csv.createReadStream();
let data = '';
for await (const chunk of download_reader) {
    data += chunk; //---I suspect I need to do something different here
}
// const data_utf8 = utf8.encode(data) //unescape( encodeURIComponent(data) );

const AZURE_STORAGE_CONNECTION_STRING = "..." //---Removed string here

// Create the BlobServiceClient object which will be used to create a container client
const blob_service_client = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);

// Get a reference to a container
const container_client = blob_service_client.getContainerClient('order');
const blob_name = 'order_copy.xlsx';

// Get a block blob client
const block_blob_client = container_client.getBlockBlobClient(blob_name);

const contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet';
const blobOptions = { blobHTTPHeaders: { blobContentType: contentType } };

//const uploadBlobResponse = await block_blob_client.upload(data_utf8, data_utf8.length, blobOptions);
const uploadBlobResponse = await block_blob_client.upload(data, data.length, blobOptions);
console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);
Any guidance would be appreciated. Thank you in advance for your help!
-Chad
Thanks @Gaurav for the suggestion not to collect the data into a string. The following code worked after I changed to an array of chunks and concatenated them with Buffer, similar to your suggested code.
let chunks = [];
for await (const chunk of download_reader) {
    chunks.push(chunk);
}
const fileBuffer = Buffer.concat(chunks);
...
const uploadBlobResponse = await block_blob_client.upload(fileBuffer, fileBuffer.length, blobOptions);
Thanks everyone!
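For anyone hitting the same corruption: appending Buffer chunks to a string decodes each chunk as UTF-8, and any byte sequence that isn't valid UTF-8 is silently replaced with U+FFFD, so the uploaded bytes no longer match the download. A minimal repro using only Node's built-in Buffer:
// Bytes that are not valid UTF-8 (binary formats like .xlsx are full of these).
const original = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0xff, 0xfe, 0xff, 0xfe]);
// This is effectively what `data += chunk` did to each chunk:
const viaString = Buffer.from(original.toString('utf8'));
console.log(original.length, viaString.length); // 8 16 - each invalid byte became a 3-byte U+FFFD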

Reading a local binary file in javascript and converting to base64

I have a local site which uses JavaScript to browse files on my machine. This is not a NodeJS question. I have been reading binary files on my local filesystem and converting them to Base64. The problem I'm having is when there are non-printable characters: the output I get from JavaScript is different from that of the base64 command-line tool on Linux.
An example file, which we can use for this question, was generated with head -c 8 /dev/random > random -- it's just some binary nonsense written to a file. On this example it yielded the following:
$ base64 random
Tg8j3hAv/u4=
If you want to play along at home you can run this to generate the same file:
echo -n 'Tg8j3hAv/u4=' | base64 -d > random
However, when I try and read that file in Javascript and convert it to base64 I get a different result:
Tg8j77+9EC/vv73vv70=
It looks kind of similar, but with some other characters in there.
Here's how I got it:
function readTextFile(file)
{
    let fileContents;
    var rawFile = new XMLHttpRequest();
    rawFile.open("GET", file, false); // third argument false = synchronous request
    rawFile.onreadystatechange = function ()
    {
        if (rawFile.readyState === 4)
        {
            if (rawFile.status === 200 || rawFile.status == 0)
            {
                fileContents = rawFile.responseText;
            }
        }
    }
    rawFile.send(null);
    return fileContents;
}
var fileContents = readTextFile("file:///Users/henrytk/tmp/stuff/random");
console.log(btoa(unescape(encodeURIComponent(fileContents))));
// I also tried
console.log(Base64.encode(fileContents));
// from http://www.webtoolkit.info/javascript_base64.html#.YVW4WaDTW_w
// but I got the same result
How is this happening? Is it something to do with how I'm reading the file? I want to be able to read that file synchronously in a way which can be run locally - no NodeJS, no fancy third-party libraries, if possible.
I believe this is the problem:
fileContents = rawFile.responseText
This will read your file as a JavaScript string, and not all binary data is valid text: any byte sequence that isn't valid UTF-8 gets replaced with the replacement character U+FFFD. That is exactly what you're seeing; the 77+9 runs in your output are the Base64 encoding of EF BF BD, which is U+FFFD in UTF-8.
I recommend using fetch to get a Blob, since that is the method I know best:
async function readTextFileAsBlob(file) {
    const response = await fetch(file);
    const blob = await response.blob();
    return blob;
}
Then, convert the blob to base64 using the browser's FileReader.
(Maybe that matches the Linux tool?)
const blobToBase64DataURL = blob => new Promise(
    resolvePromise => {
        const reader = new FileReader();
        reader.onload = () => resolvePromise(reader.result);
        reader.readAsDataURL(blob);
    }
);
In your example, you would use these functions like this:
readTextFileAsBlob("file:///Users/henrytk/tmp/stuff/random").then(
    async blob => {
        const base64URL = await blobToBase64DataURL(blob);
        console.log(base64URL);
    }
);
This will give you a URL of the form data:...;base64,<data>. You'll need to split off the prefix, but if all goes well, the last bit should be the right Base64 data. (Hopefully.)
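Splitting off the prefix is a one-liner; everything after the first comma is the Base64 payload:
const base64 = base64URL.slice(base64URL.indexOf(',') + 1);
console.log(base64); // should now match the output of `base64 random`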

How do I read a Huge Json file into a single object using NodeJS?

I'm upgrading a backend system that uses require('./file.json') to read a 1 GB JSON file into an object, and then passes that object to other parts of the system to be used.
I'm aware of two ways to read JSON files into an object:
const fs = require('fs');
const rawdata = fs.readFileSync('file.json');
const data = JSON.parse(rawdata);
and
const data = require('./file.json');
This works fine in older versions of Node (12) but not in newer versions (14 or 16).
So I need to find another way to get this 1 GB big file.json into const data without running into the ERR_STRING_TOO_LONG / Cannot create a string longer than 0x1fffffe8 characters error.
I've seen examples on StackOverflow etc. on how to stream huge JSON files like this and break them down into smaller objects, processing them individually, but this is not what I'm looking for. I need it in one data object, so that the parts of the system that expect a single data object don't have to be refactored to handle a stream.
Note: the top-level value in the JSON file is an object, not an array.
Using big-json solves this problem.
npm install big-json
const fs = require('fs');
const json = require('big-json');

const readStream = fs.createReadStream('file.json');
const parseStream = json.createParseStream();

parseStream.on('data', function (pojo) {
    // => receive reconstructed POJO
});

readStream.pipe(parseStream);
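Since the rest of the system expects one data object rather than a stream, you can wrap the parse stream in a Promise. A small sketch built on the same big-json events used above:
const fs = require('fs');
const json = require('big-json');

function readHugeJson(file) {
    return new Promise((resolve, reject) => {
        const parseStream = json.createParseStream();
        parseStream.on('data', resolve); // fires once, with the fully reconstructed object
        parseStream.on('error', reject);
        fs.createReadStream(file).pipe(parseStream);
    });
}

// Usage: const data = await readHugeJson('./file.json');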
You need to stream it, i.e. process it in chunks instead of loading it all into memory at a single point in time.
const fs = require("fs");
const stream = fs.createReadStream("file.json");
stream.on("data", (data) => {
console.log(data.toString());
});

How to convert from a filename to a URL in Electron?

Let's say I get a directory listing of .jpg files and then want to display them:
const fs = require('fs');
const path = require('path');

fs.readdirSync(someAbsolutePathToFolder)
    .filter(f => f.endsWith('.jpg'))
    .forEach(filename => {
        const img = new Image();
        const filepath = path.join(someAbsolutePathToFolder, filename);
        img.src = `file://${filepath.replace(/\\/g, '/')}`;
        document.body.appendChild(img);
    });
This will fail. As just one example, if the listing I got had names like
#001-image.jpg
#002-image.jpg
the code above results in URLs like
file://some/path/#001-image.jpg
Which you can see is a problem. The URL will be cut at the #.
This answer claims I should be using encodeURI but that also fails as it leaves the # unescaped.
Using encodeURIComponent also does not work: it replaces /, :, and spaces, and then Electron does not find the file, at least not on Windows.
Up until this point I had something like this
const filepath = path.join(someAbsolutePathToFolder, filename).replace(/\\/g, '/');
img.src = `file://${filepath.split('/').map(encodeURIComponent).join('/')}`;
but that also fails on Windows because drive letters get converted from c:\dir\file to c%3a\dir\file, which Electron then treats as a relative path.
So I had more special cases checking for [a-z]: at the beginning of the path, as well as \\ for UNC paths.
Today I ran into the # issue mentioned above, and I'm expecting more time bombs are waiting for me, so I'm asking...
What is the correct way to convert a filename to a URL in a cross platform way?
Or to be more specific, how to solve the problem above. Given a directory listing of absolute paths of image files on the local platform generate URLs that will load those images if assigned to the src property of an image.
You can use Node's (v10+) pathToFileURL:
import { pathToFileURL } from 'url';
const url = pathToFileURL('/some/path/#001-image.jpg');
img.src = url.href;
See: https://nodejs.org/api/url.html#url_url_pathtofileurl_path
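Applied to the directory listing from the question, this removes all of the manual escaping and platform special cases:
const fs = require('fs');
const path = require('path');
const { pathToFileURL } = require('url');

fs.readdirSync(someAbsolutePathToFolder)
    .filter(f => f.endsWith('.jpg'))
    .forEach(filename => {
        const img = new Image();
        // pathToFileURL percent-encodes #, spaces and non-ASCII characters,
        // and handles Windows drive letters and UNC paths.
        img.src = pathToFileURL(path.join(someAbsolutePathToFolder, filename)).href;
        document.body.appendChild(img);
    });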
This works fine for me:
const open = require('open')
const path = require('path')
const FILE_NAME = 'картинка.jpg'
const filePath = path.join(__dirname, FILE_NAME)
const fileUrl = `file:///${ filePath.split('\\').join('/') }`
console.log(fileUrl)
open(fileUrl)
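Note that building the file:/// string by hand like this still breaks on names containing # (the exact case from the question), so even here it is safer to let pathToFileURL do the escaping:
const { pathToFileURL } = require('url')
const fileUrl = pathToFileURL(filePath).href // percent-encodes # and non-ASCII names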

Access a URL and download a wav file using javascript

I am new to Java and JavaScript programming.
I was able to write a piece of code in Java to access a particular URL and download the wav file from that URL (this URL returns a wav file).
url = new URL("url name");
input = url.openStream();
output = new FileOutputStream(wavfile);

byte[] buffer = new byte[500];
int bytesRead = 0;
while ((bytesRead = input.read(buffer, 0, buffer.length)) >= 0) {
    output.write(buffer, 0, bytesRead);
}
output.close();
I do not want to do this in Java and want to be able to do it in JavaScript. The main goal is to use JavaScript to get the wav file and play it in an HTML5-enabled browser.
I eventually want to put it on an Android platform, so I'd prefer not to use Ajax or jQuery.
Please suggest how this can be done in JavaScript.
Thanks,
axs
You can't read local files with plain JavaScript alone, but HTML5 has a File API; take a look at:
http://dev.w3.org/2006/webapi/FileAPI/
Reading files in JavaScript using the File APIs
export async function loadWavFromUrl(audioContext, url) {
    const response = await fetch(url);
    const blob = await response.blob();
    const arrayBuffer = await blob.arrayBuffer();
    return await audioContext.decodeAudioData(arrayBuffer);
}
I recommend reusing AudioContext instances, which is why there is a param for it. You can call it like this:
const ac = new AudioContext();
loadWavFromUrl(ac, url).then(audioBuffer => { /* ...do stuff with it... */ });
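To actually hear the result, connect the decoded AudioBuffer to a buffer source node (standard Web Audio API):
loadWavFromUrl(ac, url).then(audioBuffer => {
    const source = ac.createBufferSource();
    source.buffer = audioBuffer;    // the decoded wav
    source.connect(ac.destination); // route to the default output
    source.start();                 // begin playback
});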
The OP may be confusing reuse of JavaScript code with Java/Android; the code will be different on each platform.
