Unable to write data (addRow) into Excel file in Node.js - javascript

I'm trying to write data into an Excel file using the ExcelJS library. I was able to create a worksheet and add the column definitions successfully.
However, when I try the addRow() or addRows() method, the data is not added to the Excel worksheet.
Here is the code I tried:
const ExcelJS = require('exceljs');
var workbook = new ExcelJS.Workbook();
var worksheet = workbook.addWorksheet('Payment Data');
worksheet.columns = reportHeaders; //reportHeaders is an array of header objects
I'm able to see the columns created successfully in the Excel sheet. The trouble starts below, where I try to add data (rows):
1st method:
worksheet.addRows(excelData); // excelData is an array of data objects
2nd method:
for (var rowItem in excelData) {
    worksheet.addRow(excelData[rowItem]);
}
However, neither of these methods seems to be working for me.
Finally, the file is saved:
workbook.xlsx.writeFile('PaymentData.xlsx')
Is there anything I'm missing? Any help will be appreciated.

The issue with your code is that you were adding data to columns without specifying the key property in the columns array, so ExcelJS had no way to map your data objects onto the columns.
I modified the worksheet.columns array to look something like the following:
worksheet.columns = [
    { header: "A", key: "a" },
    { header: "B", key: "b" },
];
This will solve your problem.
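For completeness, here is a minimal end-to-end sketch of that fix (the header/key names are hypothetical, since the real reportHeaders array isn't shown; each row object's property names must match the column keys):

const ExcelJS = require('exceljs');

const workbook = new ExcelJS.Workbook();
const worksheet = workbook.addWorksheet('Payment Data');

// Every column needs a key so addRow/addRows can map object properties to cells
worksheet.columns = [
    { header: 'Payment ID', key: 'paymentId' },
    { header: 'Amount', key: 'amount' },
];

// Row objects keyed by the column keys defined above
const excelData = [
    { paymentId: 'P-001', amount: 120.5 },
    { paymentId: 'P-002', amount: 75.0 },
];

worksheet.addRows(excelData);

// writeFile returns a Promise, so wait for it before assuming the file is complete
workbook.xlsx.writeFile('PaymentData.xlsx')
    .then(() => console.log('File written'));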

I managed to convert the object to an array and pass it to the addRow method.
This worked for me.
I'm still not sure why I can't pass an array of objects to the addRow method.
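A rough sketch of that workaround, assuming each entry in excelData is a plain object whose property order matches the column order:

for (var rowItem in excelData) {
    // Convert the row object into an array of cell values and add it as one row
    worksheet.addRow(Object.values(excelData[rowItem]));
}

One likely explanation: addRow expects a single row (either an array of cell values or an object keyed by the column keys), while addRows is the variant that accepts an array of rows, so passing an array of objects to addRow won't behave as expected.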

Related

Convert an excel xlsx file with nested headers into json

I am converting my Excel file to JSON using the library below, and it works fine when I have one header row and the remaining rows are data.
var excelToJson = require('convert-excel-to-json');

var result = excelToJson({
    sourceFile: 'Auto.xlsx',
    header: {
        rows: 2 // Used 2 to skip the first line of headers
    },
    columnToKey: {
        '*': '{{columnHeader}}'
    }
});
But I have a file that has nested header rows, similar to the layout below, and I want to convert that into JSON.
Something like this, with multiple sheets.
Appreciate any help or pointers. Thanks!
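One workable approach with the same library is to skip both header rows with header: { rows: 2 } and map each column to a key explicitly, since (as far as I can tell) the '{{columnHeader}}' mapping only picks up a single header row. A rough sketch, with hypothetical column letters and key names standing in for the real nested headers:

var excelToJson = require('convert-excel-to-json');

var result = excelToJson({
    sourceFile: 'Auto.xlsx',
    header: {
        rows: 2 // skip both header rows (the group row and the sub-header row)
    },
    // Hypothetical mapping: combine the two header levels into one key per column
    columnToKey: {
        A: 'region',
        B: 'q1_sales',
        C: 'q1_cost',
        D: 'q2_sales',
        E: 'q2_cost'
    }
});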

Parsing CSV file in vue/typescript

I am reading in a CSV file and I am in the process of parsing through it, but I'm having some trouble.
1 - Before parsing the file, I already have an array of strings with the headers I want to pull the data for from the CSV file.
2 - I want to parse the file so I can display the data in a table under my predefined headers. Any extra headers should be ignored and not displayed in the table.
Here is my code:
this.predefinedHeaders = ["Name", "Age", "Gender"];

readCSV(event: Event) {
    const file = (event.target as HTMLInputElement).files![0];
    var reader = new FileReader();
    // FileReader is asynchronous: parse inside onload, then start the read
    reader.onload = () => {
        let text = (reader.result as string).split(/\r\n|\r|\n/);
        let lines = [];
        for (var i = 1; i < text.length; i++) {
            var data = text[i].split(',');
            var tarr = [];
            for (var j = 0; j < this.predefinedHeaders.length; j++) {
                tarr.push(data[j]);
            }
            lines.push(tarr);
        }
        this.tableData = lines;
    };
    reader.readAsText(file);
}
What is currently happening is that data is populated into the table, but not under the right headers. How can I bind the data to my headers? NOTE: the predefined headers are guaranteed to be part of the original headers from the file; the difference is that data doesn't show up under all of the columns, only under several of them.
HTML View:
table
    thead
        tr
            th(v-for='column in predefinedColumns') {{column.name}}
    tbody
        tr(v-for='(a, index) in data')
            td(v-for='(b, index2) in a') {{data[index][index2]}}
You might want to use the most popular CSV parser, PapaParse.
URL for the documentation and demo:
https://www.papaparse.com/demo
This library has various configuration options, and one of them, the header row option, is exactly the solution you need.
To use your predefined headers, pass the header argument as true so all the data is parsed as key-value pairs.
Example: { data: Papa.parse(reader.result, { header: true }) }
With header: true, it takes the first row of the CSV file as the keys for every row in the file.
NPM package for the easiest implementation in a JavaScript app:
https://www.npmjs.com/package/papaparse
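As a rough sketch of how the two ideas combine (assuming the parse result's data property is an array of row objects keyed by the CSV's own header names, and that your predefined headers match those names exactly):

import Papa from 'papaparse';

const predefinedHeaders = ['Name', 'Age', 'Gender'];

function parseCSV(csvText) {
    // header: true makes PapaParse return one object per row, keyed by the header row
    const { data } = Papa.parse(csvText, { header: true, skipEmptyLines: true });

    // Keep only the predefined columns, in the predefined order
    return data.map(row => predefinedHeaders.map(header => row[header]));
}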
If you want to use your predefined headers and display a table with only the needed columns, check out this example of mine on CodeSandbox:
https://codesandbox.io/embed/llqmrp96pm
A sample CSV file is already uploaded in the same directory, so you can upload it and see it in action.
The CSV file there has 7 or 8 columns but I am displaying only 4 of them, which I assume is what you are looking for.
I see you are looking for a JavaScript solution; my example is built with ReactJS and a couple of NPM libraries, but it is almost the same as what you need, and I believe it would be easy to replicate in your code.
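If you'd rather keep the hand-rolled FileReader approach instead of adding a library, the usual trick is to read the file's own header row first, find the column index of each predefined header, and pick only those columns. A rough sketch (assuming the first line holds the headers and the fields contain no quoted commas):

const predefinedHeaders = ["Name", "Age", "Gender"];

function parseWithPredefinedHeaders(csvText) {
    const rows = csvText.split(/\r\n|\r|\n/).filter(line => line.length > 0);
    const fileHeaders = rows[0].split(',');

    // Map each predefined header to its column position in the file
    const indexes = predefinedHeaders.map(h => fileHeaders.indexOf(h));

    // For every data row, keep only the columns that match the predefined headers
    return rows.slice(1).map(line => {
        const fields = line.split(',');
        return indexes.map(i => fields[i]);
    });
}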

Creating a D3 word cloud from an array of json objects instead of reading from a json file

I am quite new to D3 and I have a D3 word cloud template that reads from a JSON file and then creates a word cloud. The part that reads from the JSON file and feeds the keys and values into the chart is:
d3.json("testdata.json", data => {
var chart = renderChart()
.svgHeight(600)
.container('#myGraph')
.data({ values: data })
.responsive(true)
.run()
})
What I wish to do is populate this word cloud from an array of JSON objects that the program builds dynamically during execution, which is why I cannot write it into a JSON file manually.
One of the many pieces of code I tried was this:
test =>{
var chart = renderChart()
.svgHeight(600)
.container('#myGraph')
.data({ values: test})
.responsive(true)
.run()
}
where test is my array of json objects.
The code runs with no errors, but it displays nothing.
Any help much appreciated!
Fixed! The code has to be:
var chart = renderChart()
    .svgHeight(600)
    .container('#myGraph')
    .data({ values: test })
    .responsive(true)
    .run()
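The original attempt defined an arrow function around the chart code but never called it, so nothing ran. A rough sketch of keeping the wrapper and invoking it explicitly (assuming renderChart and the test array are in scope) would also work:

// Hypothetical wrapper: same chart code, but actually invoked with the dynamic data
const drawCloud = words => {
    renderChart()
        .svgHeight(600)
        .container('#myGraph')
        .data({ values: words })
        .responsive(true)
        .run()
}

drawCloud(test)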

Appending new Google Sheets Data into BigQuery Table

So I'm new to all of this, both BigQuery and Apps Script (and coding in general), and I'm learning as I go, so my question may seem stupid to some. Please just hear me out.
I have created a script that loads the 10 most recent data points from one of my BigQuery tables into a Google Sheets doc. Now, when I manually add new data points to the bottom of this table, I would like a load script to run that uploads the new data back into BigQuery and appends it to my original table. I read somewhere that just by inserting into a table, the data is automatically appended if the table already exists. However, I haven't tested that part yet, since I get stuck on an error earlier up the line.
Below is the load script I have loosely copied from https://developers.google.com/apps-script/advanced/bigquery
function loadCsv() {
    var projectId = 'XX';
    var datasetId = 'YY';
    var tableId = 'daily_stats_testing';
    var file = SpreadsheetApp.getActiveSpreadsheet();
    var sheet = file.getActiveSheet().getRange("A12:AK10000").getValues();
    var data = sheet.getBlob().setContentType('application/octet-stream');
    var job = {
        configuration: {
            load: {
                destinationTable: {
                    projectId: projectId,
                    datasetId: datasetId,
                    tableId: tableId
                },
                skipLeadingRows: 1
            }
        }
    };
    job = BigQuery.Jobs.insert(job, projectId, data);
    var Msg = "Load started. Check status of it here:" +
        "https://bigquery.cloud.google.com/jobs/%s", projectId
    Logger.log(Msg);
    Browser.msgBox(Msg);
    return;
}
Now the error I get (in a variety of forms, since I've been testing things out) is that the BigQuery.Jobs function only accepts Blob data, and that the data from my current sheet (with range A12 marking the first manually entered row of data) is not recognizable as a Blob.
Is there any way (any function?) I can use that will directly convert the selected data range and make it Blob compatible?
Does anyone have any recommendation on how to do this more efficiently?
Unfortunately the script has to load directly out of the currently open spreadsheet sheet, since it is part of a larger script I will be running. Hopefully this isn't too much of a hindrance!
Thanks in advance for the help :)
Is there any way (any function?) I can use that will directly convert the selected data range and make it Blob compatible?
There is actually a function that does convert objects into blob type, namely newBlob(data).
To test this I got a range from my spreadsheet and used it.
function blobTest() {
    var sheet = SpreadsheetApp.getActiveSheet();
    var range = sheet.getRange("A1:C1");
    Logger.log(range); // here, range data is of type Range object
    var blob = Utilities.newBlob(range);
    Logger.log(blob); // here, range data is now of type Blob object
}
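For the actual load job, the Range object itself is probably not what you want inside the Blob; a common pattern is to turn the range values into a CSV string first and build the Blob from that. A rough sketch under that assumption (naive serialization, so it breaks if any cells contain commas or newlines):

function rangeToCsvBlob() {
    var values = SpreadsheetApp.getActiveSpreadsheet()
        .getActiveSheet()
        .getRange("A12:AK10000")
        .getValues();

    // Join every row's cells with commas, and the rows with newlines
    var csv = values.map(function(row) { return row.join(','); }).join('\n');

    // Build a Blob that BigQuery.Jobs.insert can accept as the load payload
    return Utilities.newBlob(csv, 'application/octet-stream');
}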

object has no method push in node js

I am trying to append the user details from the registration form to a JSON file so that the user details can be used for authentication. The problem is that I am not able to append to the JSON file in the correct format. The code I have tried so far is:
var fs = require('fs');

var filename = "./user_login.json";
var contents = fs.readFileSync(filename);
var jsonContent = JSON.parse(contents);

// sample data
var data = [
    {
        "try": "till success"
    }
];

jsonContent.push(data);
fs.writeFileSync(filename, jsonContent);
I have tried different methods that I found by googling and nothing has worked so far. I want the data to be stored in the correct format. Most of the time I get an error like "object has no push function". So what is the alternative to that?
The correct format I am looking for is:
[
    user1-details : {
        //user1 details
    },
    user2-details : {
    } //So on
]
Objects have no push function; arrays do. Your JSON is invalid too; it should be an array:
[ // here
    {
        //user1 details
    },
    {
        //So on
    }
] // and here
Now, you can use push(). However, data is an array; if you want an array of objects in your JSON file, it should be a plain object:
var data = {
    "try": "till success"
};
You also have to stringify the object before writing it back to the file:
fs.writeFileSync(filename, JSON.stringify(jsonContent));
You should consider using something like node-json-db; it will take care of reading/writing the file(s) and it gives you helper functions (save(), push(), ...).
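Putting those pieces together, a minimal sketch of the whole read-modify-write cycle (assuming user_login.json already contains a JSON array, e.g. [], and with hypothetical user fields):

var fs = require('fs');

var filename = "./user_login.json";

// Read and parse the existing array of users
var jsonContent = JSON.parse(fs.readFileSync(filename, 'utf8'));

// Push a plain object, not an array, so the file stays a flat array of users
jsonContent.push({ username: "user1", password: "secret" }); // hypothetical fields

// Stringify before writing; writeFileSync expects a string or buffer, not an object
fs.writeFileSync(filename, JSON.stringify(jsonContent, null, 2));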
