My CSV file is on drive E: in a folder called 'Awais'. I am new to coding and can't access the file. Can anybody help me with the concept of relative vs. absolute paths, and how I can access this file?
I am getting this error: "Unhandled rejection Error: File does not exist. Check to make sure the file path to your csv is correct."
const fs = require('fs')
const path = require('path')
const csv = require('csvtojson');
const csvFile = '/E/Awais/customer-data.csv';
csv()
  .fromFile(csvFile)
  .then((jsonObj) => {
    console.log(jsonObj)
    fs.writeFile(path.join(__dirname, 'customer-data.json'), JSON.stringify(jsonObj, null, 1), () => {
      console.log(jsonObj)
    });
  })
You might wanna try:
const csvFile = 'E:/Awais/customer-data.csv';
That would be an absolute path. If you want to use a relative path (meaning that the csv is located in a folder relative to where your code file is stored), you would need to let us know in which folder your code resides. Alternatively (since you already imported path) you could also do:
const csvFile = path.normalize('E:\\Awais\\customer-data.csv');
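To illustrate the difference, a minimal sketch (assuming the folder on disk really is spelled 'Awais' and that the CSV could also sit next to your script):

const path = require('path');

// Absolute path: spells out the full location, drive letter included.
const absoluteCsv = 'E:/Awais/customer-data.csv';

// Relative path: resolved against the folder this script lives in,
// e.g. if customer-data.csv sat right next to your .js file.
const relativeCsv = path.join(__dirname, 'customer-data.csv');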
I am trying to download an Excel file and then upload it to Azure Blob Storage for use in Azure Data Factory. I have a Playwright JavaScript script that worked when the file was a .csv, but when I try the same code with an Excel file, it will not open in Excel. It says:
"We found a problem with some content in 'order_copy.xlsx'. Do you want us to try to recover as much as we can?"
After clicking Yes, it says:
"Excel cannot open the file 'order_copy.xlsx' because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file."
Any ideas on how to use the createReadStream more effectively to do this and preserve the .xlsx format?
I don't think the saveAs method will work since this code is being executed in an Azure Function with no access to a local known path.
My first thought was the content type was not right, so I set that, but it still did not work. I tried a UTF-8 encoder but that also did not work.
//const data = await streamToText(download_csv.createReadStream())
const download_reader = await download_csv.createReadStream();
let data = '';
for await (const chunk of download_reader) {
  data += chunk; // ---I suspect I need to do something different here
}
// const data_utf8 = utf8.encode(data) //unescape( encodeURIComponent(data) );
const AZURE_STORAGE_CONNECTION_STRING = "..." //---Removed string here
// Create the BlobServiceClient object which will be used to create a container client
const blob_service_client = BlobServiceClient.fromConnectionString(AZURE_STORAGE_CONNECTION_STRING);
// Get a reference to a container
const container_client = blob_service_client.getContainerClient('order');
const blob_name = 'order_copy.xlsx';
// Get a block blob client
const block_blob_client = container_client.getBlockBlobClient(blob_name);
const contentType = 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
const blobOptions = { blobHTTPHeaders: { blobContentType: contentType } };
//const uploadBlobResponse = await block_blob_client.upload(data_utf8, data_utf8.length, blobOptions);
const uploadBlobResponse = await block_blob_client.upload(data, data.length, blobOptions);
console.log("Blob was uploaded successfully. requestId: ", uploadBlobResponse.requestId);
Any guidance would be appreciated. Thank you in advance for your help!
-Chad
Thanks @Gaurav for the suggestion to not read the data into a string. The following code worked after I changed to collecting the chunks in an array and concatenating them with Buffer.concat, similar to your suggested code.
let chunks = []
for await (const chunk of download_reader) {
  chunks.push(chunk)
}
const fileBuffer = Buffer.concat(chunks)
...
const uploadBlobResponse = await block_blob_client.upload(fileBuffer, fileBuffer.length, blobOptions);
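For reference, the same idea as a small reusable helper (just a sketch; streamToBuffer is a name introduced here, not part of the Azure SDK or Playwright):

// Collect any async-iterable readable stream into a single Buffer,
// avoiding the lossy string concatenation from the original code.
async function streamToBuffer(readable) {
  const chunks = [];
  for await (const chunk of readable) {
    // Binary streams yield Buffers already; coerce just in case.
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}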
Thanks everyone!
What I want to do: I have a file 'test_download.xlsx' on the server side and want to send it to the client side, so that I can apply XLSX.read() to the retrieved object.
So I tried it this way
Server:
let filename = 'test_download.xlsx';
const buffer = fs.readFileSync(filename)
res.json(buffer);
Client:
file = await axios.get('/dataexplorer/test');
console.log(file);
console.log(XLSX.read(file.data, {type: "buffer"}));
(The first and second console.log outputs were posted as screenshots and are not reproduced here.)
The problem is that it doesn't match my Excel file at all, even just in terms of sheets (my file has 3 different sheet names).
Do you have any idea what the problem is?
Thanks
On the server side, just use:
const filename = 'test_download.xlsx';
// make sure to include the name and the extension of the file in the path
const filepath = 'your/path/to/the/file/test_download.xlsx';
/** filename is optional:
 * if you don't pass it, the name of the file in the filepath will be used
 * if you pass it, the file at the filepath will be downloaded under the name `filename`
 */
res.download(filepath, filename);
This will return a blob to the client (make sure to include the correct headers for the response type), and then you can just save it or work with it like this:
file = await axios.get('/dataexplorer/test',{ responseType: "blob"});
const ab = await file.data.arrayBuffer();
XLSX.read(Buffer.from(ab),{type:"buffer"})
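As an aside on why the original approach failed: res.json() serializes a Buffer as { type: 'Buffer', data: [...] }, so the client receives JSON rather than binary data. If you cannot change the server, here is a sketch of recovering the workbook from that JSON shape:

// The payload from res.json(buffer) looks like { type: 'Buffer', data: [80, 75, ...] },
// so rebuild the bytes before handing them to XLSX.
const resp = await axios.get('/dataexplorer/test');
const workbook = XLSX.read(Buffer.from(resp.data.data), { type: 'buffer' });
console.log(workbook.SheetNames); // should now list all 3 sheets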
Right, so I have a folder full of other folders, which are compressed into .gz files. Inside these folders are text files.
I want a program that loops through these text files to see if they contain a specific string, but to do so I need to decompress them first. I don't want to start messing about with files on disk (unless I can just create them temporarily and delete them afterwards); I just want to perform operations on the contents of the .gz folder. I've tried zlib.Gunzip()._outBuffer.toString(), which gives a load of gibberish when used on a compressed folder.
How should I proceed?
Had to do something quite similar recently, here's what worked for me:
Basically you just read the file into a buffer, which you can then pass to the gunzip function. This will return another buffer, on which you can invoke toString('utf8') in order to get the contents as a string, which is exactly what you need:
const util = require('util');
const fs = require('fs');
let { gunzip } = require('zlib');
gunzip = util.promisify(gunzip);

async function getStringFromGzipFile(inputFilePath) {
  const sourceBuffer = await fs.promises.readFile(inputFilePath);
  // gunzip resolves to a Buffer; convert it to a string before returning
  return (await gunzip(sourceBuffer)).toString('utf8');
}

(async () => {
  const stringContent = await getStringFromGzipFile('/path/to/file');
  console.log(stringContent);
})()
EDIT:
If you want to gunzip and extract a directory, you can use tar-fs, which will extract the contents to a specified directory. Once you're done processing the files in it, you can just remove the directory. Here's how you would gunzip and extract a .tar.gz:
const zlib = require('zlib');
const tar = require('tar-fs'); // npm install tar-fs

function gunzipFolder(sourceDir, destination) {
  fs.createReadStream(sourceDir)
    .pipe(zlib.createGunzip())
    .pipe(tar.extract(destination));
}
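To tie this back to the original goal, a sketch that extracts into a temporary directory, scans the text files for a string, and deletes the directory afterwards (the .txt extension and the archiveContains name are assumptions, not part of tar-fs):

const os = require('os');
const path = require('path');
const { pipeline } = require('stream/promises');

async function archiveContains(archivePath, needle) {
  // Extract into a throwaway temp directory.
  const dest = fs.mkdtempSync(path.join(os.tmpdir(), 'gz-'));
  await pipeline(fs.createReadStream(archivePath), zlib.createGunzip(), tar.extract(dest));
  // Walk the extracted tree and search every text file for the string.
  const stack = [dest];
  let found = false;
  while (stack.length) {
    const dir = stack.pop();
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) stack.push(full);
      else if (entry.name.endsWith('.txt') &&
               fs.readFileSync(full, 'utf8').includes(needle)) found = true;
    }
  }
  // Clean up the temporary files, as the question asked.
  fs.rmSync(dest, { recursive: true, force: true });
  return found;
}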
Let's say I get a directory listing of .jpg files and then want to display them
const fs = require('fs');
const path = require('path');

fs.readdirSync(someAbsolutePathToFolder)
  .filter(f => f.endsWith('.jpg'))
  .forEach(filename => {
    const img = new Image();
    const filepath = path.join(someAbsolutePathToFolder, filename);
    img.src = `file://${filepath.replace(/\\/g, '/')}`;
    document.body.appendChild(img);
  });
This will fail. As just one example, if the listing I got had names like
#001-image.jpg
#002-image.jpg
the code above results in URLs like
file://some/path/#001-image.jpg
As you can see, that is a problem: the URL will be cut off at the #.
This answer claims I should be using encodeURI but that also fails as it leaves the # unescaped.
Using encodeURIComponent also does not work. It replaces /, :, and spaces, and then Electron does not find the file, at least not on Windows.
Up until this point I had something like this
const filepath = path.join(someAbsolutePathToFolder, filename).replace(/\\/g, '/');
img.src = `file://${filepath.split('/').map(encodeURIComponent).join('/')}`;
but that also fails on Windows because drive letters get converted from c:\dir\file to c%3a\dir\file, which then looks like a relative path to Electron.
So I have more special cases checking for [a-z]: at the beginning of the path, as well as \\ for UNC paths.
Today I ran into the # issue mentioned above, and I expect more time bombs are waiting for me, so I'm asking...
What is the correct way to convert a filename to a URL in a cross platform way?
Or to be more specific, how to solve the problem above. Given a directory listing of absolute paths of image files on the local platform generate URLs that will load those images if assigned to the src property of an image.
You can use Node's (v10+) pathToFileURL:
import { pathToFileURL } from 'url';
const url = pathToFileURL('/some/path/#001-image.jpg');
img.src = url.href;
See: https://nodejs.org/api/url.html#url_url_pathtofileurl_path
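Applied to the directory listing from the question, that becomes (a sketch reusing the question's someAbsolutePathToFolder placeholder):

const fs = require('fs');
const path = require('path');
const { pathToFileURL } = require('url');

fs.readdirSync(someAbsolutePathToFolder)
  .filter(f => f.endsWith('.jpg'))
  .forEach(filename => {
    const img = new Image();
    // pathToFileURL percent-encodes characters like '#' and handles
    // Windows drive letters and UNC paths for you.
    img.src = pathToFileURL(path.join(someAbsolutePathToFolder, filename)).href;
    document.body.appendChild(img);
  });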
This works fine for me:
const open = require('open')
const path = require('path')
const FILE_NAME = 'картинка.jpg'
const filePath = path.join(__dirname, FILE_NAME)
const fileUrl = `file:///${ filePath.split('\\').join('/') }`
console.log(fileUrl)
open(fileUrl)
I want to..
.. convert an ICO file (e.g. http://www.google.com/favicon.ico ) to a PNG file after I downloaded it.
.. preserve transparency.
.. apply the solution in a node.js application.
I don't want to and already tried to ..
.. use native tools such as imagemagick (that's what I currently use in my application, but it's really bad for maintaining platform independence).
.. use tools that internally use native tools (e.g. gm.js).
.. rely on webservices such as http://www.google.com/s2/favicons?domain=www.google.de that don't allow configuring the resulting size or require payments or logins.
Therefore I'd love a JavaScript-only solution. I used Jimp in another application, but it does not support ICO files.
Any help is appreciated. Thanks!
Use a FileReader(). Convert the Base64 to a data/png. Done.
const inputFile = __dirname + "/favicon.ico";
const outputFile = __dirname + "/favicon.png";

(function (inputFile, outputFile) {
  const fileApi = require("file-api");
  const fs = require("fs");
  const File = fileApi.File;
  var fileReader = new fileApi.FileReader();
  fileReader.readAsDataURL(new File(inputFile));
  fileReader.addEventListener("load", function (ev) {
    var rawdata = ev.target.result;
    rawdata = rawdata.replace(/.*base64,/, "");
    fs.writeFileSync(outputFile, rawdata, "base64");
  });
})(inputFile, outputFile);
I am not familiar with the Node environment, but I wrote this ES6 module, PNG2ICOjs, using purely JavaScript ArrayBuffer and Blob, and it can run 100% in client-side browsers (I assume a Node File should act like a Blob).
import { PngIcoConverter } from "../src/png2icojs.js";

// ...
const converter = new PngIcoConverter(); // converter instance (elided in the original post)

const inputs = [...files].map(file => ({
  png: file
}));

// Result is a Blob
const resultBlob1 = await converter.convertToBlobAsync(inputs); // Default MIME type is image/x-icon
const resultBlob2 = await converter.convertToBlobAsync(inputs, "image/your-own-mime");

// Result is a Uint8Array
const resultArr = await converter.convertAsync(inputs);