I am trying to read from an Excel file, manipulate the data, and create another Excel file from it; I am using stream support for this. Most of it is working fine, but the resulting Excel file contains {"sharedString":0} instead of the actual values.
Below is my relevant code
let ws = fs.createWriteStream(fpath);
const workbook = new ExcelJS.stream.xlsx.WorkbookWriter({ stream: ws, useStyles: true, useSharedStrings: true });
const myworksheet = workbook.addWorksheet('sheet1');
const workbookReader = new ExcelJS.stream.xlsx.WorkbookReader(sheet.path, options);
workbookReader.read();

workbookReader.on('worksheet', worksheet => {
  worksheet.on('row', row => {
    myworksheet.addRow(row.values);
  });
});

workbookReader.on('shared-strings', sharedString => {
  console.log('not coming here');
});

workbookReader.on('end', async () => {
  console.log('processing done...');
  await workbook.commit();
});
Please see the attached file for your reference.
Any help on how to fix this would be really great, thanks.
Once I created the WorkbookReader with the options below
const options = {
  sharedStrings: 'cache',
  hyperlinks: 'cache',
  worksheets: 'emit',
  styles: 'cache',
};
it worked!
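For reference, here is the same reader wired up with that options object (a condensed sketch of the code from my question; sharedStrings: 'cache' is the part that makes row.values contain the actual text instead of {"sharedString":0}):

const workbookReader = new ExcelJS.stream.xlsx.WorkbookReader(sheet.path, options);
workbookReader.read();

workbookReader.on('worksheet', worksheet => {
  worksheet.on('row', row => {
    // with sharedStrings: 'cache', these are the resolved cell values
    myworksheet.addRow(row.values);
  });
});

workbookReader.on('end', async () => {
  await workbook.commit();
});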
I'm trying to fetch the text contents of the first page of a PDF file using the npm module 'pdf-lib'.
However, when I fetch the contents and print the result, I instead get data that looks something like the below:
Could you please help me spot the problem?
Thanks in advance!
The results I get after printing look like this. What I want to fetch are the actual text contents of the PDF page.
PDFPage {
  fontSize: 24,
  fontColor: { type: 'RGB', red: 0, green: 0, blue: 0 },
  lineHeight: 24,
  x: 0,
  y: 0,
  node: PDFPageLeaf {
    dict: Map(8) {
      [PDFName] => [PDFName],
      [PDFName] => [PDFRef],
      [PDFName] => [PDFDict],
      [PDFName] => [PDFArray],
      [PDFName] => [PDFRef],
      [PDFName] => [PDFDict],
      [PDFName] => [PDFName],
      [PDFName] => [PDFNumber]
    },
    ...
    ...
    ...
The Code:
const { resolve } = require('path');
const { PDFDocument } = require('pdf-lib'); // Library for reading PDF file
const fs = require('fs');

async function readDataset() {
  try {
    // Load the PDF document
    const content = await PDFDocument.load(fs.readFileSync(resolve(`./app/assets/pdfs/np.pdf`)));
    // Get page contents
    const contentPages = content.getPages();
    let pageContent = contentPages[0];
    // Return data found on first page
    return pageContent;
  }
  catch (err) {
    return err;
  }
}

// Read data from dataset
let dataset = await readDataset();
Not generally possible at present (2021) with this library; see the current Limitations. This info is also on the npm page at https://www.npmjs.com/package/pdf-lib#limitations
#1: pdf-lib can extract the content of text fields (see PDFTextField.getText), but it cannot extract plain text on a page outside of a form field. This is a difficult feature to implement, but it is within the scope of this library and may be added to pdf-lib in the future. See #93, #137, #177, #329, and #380.
For future visitors: always check the link above for the current status.
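If the text you need happens to live in a form field, the PDFTextField.getText route mentioned in that limitation does work. A minimal sketch; the field name 'myTextField' is hypothetical and must match a field that actually exists in your PDF's form:

const fs = require('fs');
const { PDFDocument } = require('pdf-lib');

async function readFieldText() {
  const pdfDoc = await PDFDocument.load(fs.readFileSync('./app/assets/pdfs/np.pdf'));
  const form = pdfDoc.getForm();
  // 'myTextField' is a placeholder; use the real field name from your PDF
  const field = form.getTextField('myTextField');
  return field.getText();
}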
How do I get the input file path using NeutralinoJS?
My Code:
<input type="file" id="inputFile">
const inputFilePath = document.getElementById('inputFile').files[0].path
console.log(inputFilePath)
I don't think browsers allow you to get file paths.
You could use the file picker API instead, os.showDialogOpen(DialogOpenOptions):
https://neutralino.js.org/docs/api/os#osshowdialogopendialogopenoptions
<button onclick="onFileUpload()">Select a file</button>

async function onFileUpload() {
  let response = await Neutralino.os.showDialogOpen({
    title: 'Select a file'
  });
  console.log(`You've selected: ${response.selectedEntry}`);
}
Why do you need the path? If you need the content of the uploaded file, you can get it via the JavaScript FileReader API and use the contents.
If you need the file for later use, you can read it via the FileReader and then create and save a new file with filesystem.writeFile(WriteFileOptions) to your preferred location (maybe the app's internal temp path). Be sure the destination path exists; for that you can use filesystem.createDirectory(CreateDirectoryOptions).
Example with jQuery:
jQuery(document).on('change', '#myUpload', function(){ // File input changed
  if(jQuery(this).val().length > 0){ // Check if a file was chosen
    let input_file = this.files[0];
    let file_name = input_file.name;
    let fr = new FileReader();
    fr.onload = function(e) {
      let fileCont = e.target.result;
      // Do something with file content
      saveMyFile(file_name, fileCont); // execute async function to save file
    };
    fr.readAsText(input_file);
  }
});

async function saveMyFile(myFileName, myFileContent){
  // create path if it does not exist
  await Neutralino.filesystem.createDirectory({ path: './myDestPath' }).then(
    () => { console.log("Path created."); },
    () => { console.log("Path already exists."); }
  );
  // write the file:
  await Neutralino.filesystem.writeFile({
    fileName: './myDestPath/' + myFileName,
    data: myFileContent
  });
}
You can use the Neutralino.os API for showing the Open/Save File Dialogs.
This is an example of opening a file.
HTML:
<button type="button" id="inputFile">Open File</button>
JavaScript:
document.getElementById("inputFile").addEventListener("click", openFile);
async function openFile() {
let entries = await Neutralino.os.showOpenDialog('Save your diagram', {
filters: [
{name: 'Images', extensions: ['jpg', 'png']},
{name: 'All files', extensions: ['*']}
]
});
console.log('You have selected:', entries);
}
So I'm having problems with my csv-parser: it is reading values from a csv file where empty cells add an extra column. It gives an error of
column header mismatch expected: 17 columns got: 18
For now I have to go into the csv file and delete a comma to make the columns match. I know this is a csv parsing issue; has anyone encountered it? Below is my csv code.
function readStream() {
  let stream = fs.createReadStream("accounts.csv");
  fast
    .fromStream(stream, {
      headers: true
    })
    .on("data", fetchYelp, fetchWhitePages, fetchGooglePlace, writeStream)
    .on("end", function () {
      console.log("Done Reading");
    });
}

readStream();
Could you try using the discardUnmappedColumns option, e.g. as below? That works for me!
function readStream() {
  let stream = fs.createReadStream("accounts.csv");
  fast
    .fromStream(stream, {
      headers: true,
      discardUnmappedColumns: true
    })
    .on("data", fetchYelp, fetchWhitePages, fetchGooglePlace, writeStream)
    .on("end", function () {
      console.log("Done Reading");
    });
}

readStream();
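If you would rather detect the malformed rows than silently drop the extra column, fast-csv also documents a strictColumnHandling option (treat the option name and its availability in your version as an assumption to verify), which routes rows whose column count does not match the headers to a data-invalid event. A rough sketch:

fast
  .fromStream(fs.createReadStream("accounts.csv"), {
    headers: true,
    strictColumnHandling: true // assumption: verify this option exists in your fast-csv version
  })
  .on("data-invalid", function (row) {
    console.warn("Column count mismatch:", row);
  })
  .on("data", writeStream)
  .on("end", function () {
    console.log("Done Reading");
  });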
The URL below points to a zip file which contains a file called bundlesizes.json. I am trying to read the contents of that JSON file within my React application (no Node server/backend involved).
https://dev.azure.com/uifabric/cd9e4e13-b8db-429a-9c21-499bf1c98639/_apis/build/builds/8838/artifacts?artifactName=drop&api-version=4.1&%24format=zip
I was able to get the contents of the zip file by doing the following
const url =
  'https://dev.azure.com/uifabric/cd9e4e13-b8db-429a-9c21-499bf1c98639/_apis/build/builds/8838/artifacts?artifactName=drop&api-version=4.1&%24format=zip';
const response = await Axios({
  url,
  method: 'GET',
  responseType: 'stream'
});
console.log(response.data);
This logs the raw zip data (non-ASCII characters). However, I am looking to read the contents of the bundlesizes.json file within it.
For that I looked up jszip and tried the following,
var zip = new JSZip();
zip.createReader(
  new zip.BlobReader(response.data),
  function(reader: any) {
    // get all entries from the zip
    reader.getEntries(function(entries: any) {
      if (entries.length) {
        // get first entry content as text
        entries[0].getData(
          new zip.TextWriter(),
          function(text: any) {
            // text contains the entry data as a String
            console.log(text);
            // close the zip reader
            reader.close(function() {
              // onclose callback
            });
          },
          function(current: any, total: any) {
            // onprogress callback
            console.log(current);
            console.log(total);
          }
        );
      }
    });
  },
  function(error: any) {
    // onerror callback
    console.log(error);
  }
);
However, this does not work for me, and errors out.
This is the error I receive
How can I read the contents of a file inside the zip in my React application using JavaScript/TypeScript?
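For what it's worth, the createReader / BlobReader / TextWriter pattern above belongs to the zip.js library rather than JSZip, whose own API is promise-based. A minimal sketch of reading the archive with JSZip itself, assuming the response is fetched as an arraybuffer; the entry path drop/bundlesizes.json is an assumption, so inspect zip.files to confirm the real one:

import Axios from 'axios';
import JSZip from 'jszip';

async function readBundleSizes(url) {
  // Fetch the zip as binary data (browsers do not support responseType: 'stream')
  const response = await Axios({ url, method: 'GET', responseType: 'arraybuffer' });

  // Parse the archive and read one entry as text
  const zip = await JSZip.loadAsync(response.data);
  const entry = zip.file('drop/bundlesizes.json'); // assumed path inside the archive
  if (!entry) return undefined;
  const text = await entry.async('string');
  return JSON.parse(text);
}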
I'm building a mobile app with Cordova. I am using PouchDB for local storage so the app works without internet. PouchDB syncs with a CouchDB server so you can access your data everywhere.
Now I've got to the point where I need to add a function to upload (multiple) files to a document (files like .png, .jpg, .mp3, .mp4; all possible file types).
My original code without the file upload:
locallp = new PouchDB('hbdblplocal-' + loggedHex);

function addItem() {
  // get info
  var itemTitle = document.getElementById('itemTitle').value;
  var itemDesc = document.getElementById('itemDesc').value;
  var itemDate = document.getElementById('itemDate').value;
  var itemTime = document.getElementById('itemTime').value;
  // get correct database
  console.log(loggedHex);
  console.log(loggedInUsername);
  // add item to database
  var additem = {
    _id: new Date().toISOString(),
    title: itemTitle,
    description: itemDesc,
    date: itemDate,
    time: itemTime
  };
  locallp.put(additem).then(function (result) {
    console.log("Added to the database");
    location.href = "listfunction.html";
  }).catch(function (err) {
    console.log("Something bad happened!");
    console.log(err);
  });
}
I'll add a link to a JSFiddle where I show my attempt to add the file upload. I've also included the HTML part.
link to jsfiddle: click here
I've noticed an error in the console about there not being a content-type.
Is there someone who can help me?
I think you're not setting the content_type of your attachment right. Try changing type to content_type like so:
var additem = {
  _id: new Date().toISOString(),
  title: itemTitle,
  description: itemDesc,
  date: itemDate,
  time: itemTime,
  _attachments: {
    "file": {
      content_type: getFile.type,
      data: getFile
    }
  }
};
Also see the docs for working with attachments.
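If it helps, once the document is stored with that _attachments block, the file can be read back with PouchDB's getAttachment; a small sketch using the same database (docId here stands for whatever _id you used when putting the document):

// Read the attachment back as a Blob; 'file' matches the attachment key used above
locallp.getAttachment(docId, 'file').then(function (blob) {
  var url = URL.createObjectURL(blob); // e.g. to display an image or play a media file
  console.log("Attachment URL:", url);
}).catch(function (err) {
  console.log(err);
});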