While XLSX file is downloaded, average total value shows 0 - javascript

After downloading the xlsx file, I need the values to be summed up and shown in the average/total count. Kindly help me fix this issue; I have attached a screenshot of the current situation, which shows 0.
TS code:
public exportAsExcelFile(summary: any[], json: any[], excelFileName: string): void {
  let report = "Global Service Desk SLA";
  let ReportName = [{ "Report": `Report Name : ${report}` }];
  // Start the sheet with the report name, then fill the header cells A2-A4 manually.
  const worksheet: XLSX.WorkSheet = XLSX.utils.json_to_sheet(ReportName, { skipHeader: true });
  if (worksheet['A2'] == undefined) {
    worksheet['A2'] = { "t": "s", "v": `Date Range : ${summary[0].FromDate + " - " + summary[0].ToDate}` };
  }
  if (worksheet['A3'] == undefined) {
    worksheet['A3'] = { "t": "s", "v": `Bot : ${summary[0].Bot}` };
  }
  if (worksheet['A4'] == undefined) {
    worksheet['A4'] = { "t": "s", "v": `Timezone : ${summary[0].timeZone}` };
  }
  const workbook: XLSX.WorkBook = { Sheets: { 'data': worksheet }, SheetNames: ['data'] };
  // The data rows start at A7, below the header block.
  XLSX.utils.sheet_add_json(worksheet, json, { origin: "A7" });
  const excelBuffer: any = XLSX.write(workbook, { bookType: 'xlsx', type: 'array' });
  this.saveAsExcelFile(excelBuffer, excelFileName);
}
If I remove a single 0 from a cell in that column, the average/total is shown (please see my screenshot). How do I fix this in my TS code?
[screenshot: 0 removed]
If I select the Queue Time column as in the screenshot, the average/total shows 0; I need the sum of the values in the Average section.
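A common cause of this (an assumption on my part, since it depends on how the json rows are built) is that the numeric columns arrive as strings, e.g. "12" instead of 12. sheet_add_json then writes them as text cells, and Excel's status-bar Average/Sum ignores text, so it shows 0. A minimal sketch of coercing such fields to real numbers before writing them, where numericColumns is a hypothetical list of the affected column keys:

// Hypothetical list of column keys that should be numeric (adjust to your data).
const numericColumns = ['QueueTime'];
const normalized = json.map(row => {
  const copy = { ...row };
  numericColumns.forEach(key => {
    const n = Number(copy[key]);
    // Only convert values that really look like numbers.
    if (copy[key] !== '' && copy[key] != null && !isNaN(n)) {
      copy[key] = n;
    }
  });
  return copy;
});
// Number values are written as numeric cells ("t":"n"), which the Average/Sum picks up.
XLSX.utils.sheet_add_json(worksheet, normalized, { origin: 'A7' });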

Related

Google Sheets, stack report from multiple workbooks

Goal: to stack data from 90+ Google workbooks, all with the same sheet name, into one master sheet for reporting.
Info:
All worksheets have the same number of columns.
I have the following script, but it does not run properly; I think the issue is with how I am caching/pushing the data to the array before pasting it to the output sheet.
I am trying to build an array and then paste it in one go.
The tables I am stacking have 47 columns and an unknown number of rows.
The part that opens the sheets is all working perfectly.
// Get the data from the worksheets
var indexsheet = SpreadsheetApp.getActive().getSheetByName("Index");
var outputsheet = SpreadsheetApp.getActive().getSheetByName("Output");
var response = SpreadsheetApp.getUi().prompt('Current Cycle', 'Enter Cycle Name Exactly in YY-MMM-Cycle# format', SpreadsheetApp.getUi().ButtonSet.OK_CANCEL)
var CurrentCycleName = response.getResponseText()
// Assign datasets to variables
var indexdata = indexsheet.getDataRange().getValues();
// For each workbook in the index sheet, open it and copy the data to a cache
indexdata.forEach(function(row, r) {
  try {
    // open Entity specific workbook
    var workbookid = indexsheet.getRange(r + 1, 7, 1, 1).getValues();
    var Entityworkbook = SpreadsheetApp.openById(workbookid)
    // Open worksheet
    Entitysheet.getSheetByName(CurrentCycleName)
    // Add PR Data to cache - stacking for all countries
    var PRDataCache = Entitysheet.getDataRange().push()
  } catch {}
})
// Set all the values of the sheet at once
outputsheet.getRange(r + 1, 14).setValue('Issue Splitting Data')
Entitysheet.getRange(2, 1, PRDataCache.length || 1, 47).setValues(PRDataCache)
};
This is the Index tab where we are getting the workbookid from to open each file.
This is the output file, where we are stacking all data from each country.
I believe your goal is as follows.
You want to retrieve the Spreadsheet IDs from column "G" of the "Index" sheet.
You want to give the specific sheet name using a dialog.
You want to retrieve all values from the specified sheet in all Spreadsheets. In this case, you want to remove the header row.
You want to put the retrieved values on the "Output" sheet.
In this case, how about the following sample script?
Sample script:
function myFunction() {
  var ss = SpreadsheetApp.getActive();
  var indexsheet = ss.getSheetByName("Index");
  var outputsheet = ss.getSheetByName("Output");
  var response = SpreadsheetApp.getUi().prompt('Current Cycle', 'Enter Cycle Name Exactly in YY-MMM-Cycle# format', SpreadsheetApp.getUi().ButtonSet.OK_CANCEL);
  var CurrentCycleName = response.getResponseText();
  var ids = indexsheet.getRange("G1:G" + indexsheet.getLastRow()).getValues();
  var values = ids.reduce((ar, [id]) => {
    try {
      // Skip the header row of each source sheet and keep only the data rows.
      var [, ...values] = SpreadsheetApp.openById(id).getSheetByName(CurrentCycleName).getDataRange().getValues();
      ar = [...ar, ...values];
    } catch (e) {
      console.log(`"${id}" was not found.`);
    }
    return ar;
  }, []);
  if (values.length == 0) return;
  // If the number of columns is different in all Spreadsheets, please use the following script.
  // var maxLen = Math.max(...values.map(r => r.length));
  // values = values.map(r => r.length < maxLen ? [...r, ...Array(maxLen - r.length).fill("")] : r);
  outputsheet.getRange(outputsheet.getLastRow() + 1, 1, values.length, values[0].length).setValues(values);
}
Note:
When the number of Spreadsheet IDs is large, the processing time might exceed the 6-minute execution limit. I'm worried about this. In that case, how about separating the Spreadsheet IDs into batches, as sketched below?
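A minimal sketch of one way to separate them, assuming the same "Index"/column "G" layout as above: process the IDs in fixed-size batches and remember the progress in Script Properties, re-running the function (manually or via a time-driven trigger) until every batch is done. The batch size and the property key are arbitrary choices.

function processNextBatch() {
  var BATCH_SIZE = 20; // tune so one run stays well under the execution limit
  var props = PropertiesService.getScriptProperties();
  var start = Number(props.getProperty("nextIndex") || 0);
  var indexsheet = SpreadsheetApp.getActive().getSheetByName("Index");
  var ids = indexsheet.getRange("G1:G" + indexsheet.getLastRow()).getValues();
  var batch = ids.slice(start, start + BATCH_SIZE);
  if (batch.length == 0) return; // all IDs have been processed
  // ...run the same reduce/setValues logic from the sample script on `batch`...
  props.setProperty("nextIndex", String(start + batch.length));
}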
Reference:
reduce()

How can I write to a cell in a specific row in Google App Scripts inside of a conditional loop?

I would like to create a PDF for each row in my sheet that has not already had a PDF created. I'm doing this by assigning a binary to rows based on their creation status and then conditioning my loop on that binary.
My issue is that I cannot figure out how to set the value of the printed column to 1 after the loop iterates on that row.
function createBulkPDFs() {
  const docFile = DriveApp.getFileById("1zGTNkzUr_ApaYSpqdu_PKcEuDrVm9r9kA_oyqfIRWeI");
  const TempFolder = DriveApp.getFolderById("1h8RHp890f0HGsc-dulwEP9Urrpnq7xcw");
  const pdfFolder = DriveApp.getFolderById("1boy1E2Ih3Cp3zMTak8nAUtmtc7e34Y1L");
  const currentSheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("WorkOrders");
  const data = currentSheet.getRange(2, 1, currentSheet.getLastRow() - 1, 11).getDisplayValues();
  data.forEach(row => {
    var printed = row[10];
    if (printed !== 1) {
      createPDF(row[2], row[3], row[0], row[4], row[1], row[5], row[7], row[6], row[8], docFile, TempFolder, pdfFolder);
      SpreadsheetApp.getActiveSheet().getRange(printed).setValue(1); // This is where my code fails. "Range not found"
    }
  });
}
Replace
data.forEach(row => {
and
SpreadsheetApp.getActiveSheet().getRange(printed).setValue(1);
by
data.forEach((row, i) => {
and
SpreadsheetApp.getActiveSheet().getRange(i + 2, 11).setValue(1);
respectively.
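Putting the two replacements together, the modified loop looks like this:

data.forEach((row, i) => {
  var printed = row[10];
  if (printed !== 1) {
    createPDF(row[2], row[3], row[0], row[4], row[1], row[5], row[7], row[6], row[8], docFile, TempFolder, pdfFolder);
    // The data range starts at sheet row 2, so array index i maps to sheet row i + 2;
    // column 11 is the "printed" column.
    SpreadsheetApp.getActiveSheet().getRange(i + 2, 11).setValue(1);
  }
});

As an aside, getDisplayValues() returns strings, so the printed flag will come back as "1" rather than the number 1; you may want to compare against the string instead.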
Resources
https://developers.google.com/apps-script/reference/spreadsheet/sheet#getrangerow,-column

How do I get elements of the array generated by the XLSX module from an extracted Excel file without getting undefined

I used the xlsx module in a Node Electron project to extract the content of an Excel file.
I am only able to extract the items in the first column successfully,
but I am unable to extract the content of subsequent columns.
Please advise me on how I can get the content of subsequent columns.
var XLSX = require("xlsx");
var workbook = XLSX.readFile(excelFile.path);
var sheet_name_list = workbook.SheetNames;//gives sheet name
var worksheet = workbook.Sheets[sheet_name_list];
var dataArray = XLSX.utils.sheet_to_json(worksheet);
alert("dataArray is : " + dataArray);
var newData = dataArray.map(function (record) {
  alert("rec 1 :" + record[1])
  alert("rec 2 :" + record[2])
});
I attached screenshots of the alert pop-ups generated by JavaScript, and also a screenshot of the xlsx file being uploaded and extracted.
You can see from the screenshots that I extracted the content of the first column using record[1], which comes out as 1.
But the content of the other columns comes out as undefined.
Why do the other columns come out as undefined, and how do I get their content?
I was able to iterate using the code below:
const workbook = XLSX.readFile(excelFile.path);
var sheetNames = workbook.SheetNames;
// alert("sheetnames :> " + sheetNames);
var sheetIndex = 1;
var df = XLSX.utils.sheet_to_json(workbook.Sheets[sheetNames[sheetIndex - 1]]);
// alert(JSON.stringify(df));
// alert(JSON.stringify(df, null, 4));
// alert("df Length :> " + df.length);
df.forEach(function (arrayItem) {
  var x = arrayItem;
  alert(JSON.stringify(x.ID));
  // alert(JSON.stringify(x, null, 4));
});
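For context on why record[1] and record[2] came back undefined: sheet_to_json returns one object per row, keyed by the header names from the first row (hence x.ID above), not by numeric indices. If positional access is what you want, the library also accepts a header: 1 option, which returns an array of arrays instead - a small sketch reusing the workbook from above:

// header: 1 makes sheet_to_json return each row as an array, so columns can be
// read by position. Row 0 is then the header row itself, so skip it.
var rows = XLSX.utils.sheet_to_json(workbook.Sheets[sheetNames[0]], { header: 1 });
rows.slice(1).forEach(function (row) {
  alert("col 0: " + row[0] + ", col 1: " + row[1] + ", col 2: " + row[2]);
});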

Convert a csv file to an array of specific columns chosen by column name

I am new to Node.js and am trying to write a program which reads a csv file, selects data from the csv file by column names, and converts the columns into arrays.
I figured out how to read the csv file and convert the columns into arrays (thanks, internet), but the columns are selected by index number rather than by name.
code:
var csv = require('csv');
var csv_obj = csv();

function Column_data(signal_name, signal_type, initial_value, minimum, maximum) {
  this.signal_name = signal_name;
  this.signal_type = signal_type;
  this.initial_value = initial_value;
  this.minimum = minimum;
  this.maximum = maximum;
};

var csv_data = [];
csv_obj.from.path('../data_files/Signals_Info.csv').to.array(function (data) {
  for (var row = 0; row < data.length; row++) {
    csv_data.push(new Column_data(data[row][0], data[row][1], data[row][2], data[row][3], data[row][4]));
  }
  console.log(csv_data);
});
Here I'm using index values to access the columns in my csv. I want to be able to access columns by name rather than by index number, because the index can change.
The following code gives you two options. One is using the 4.0.0 version of the csv library (npm install csv), the other is doing it by hand. Both return the same results given the sample input included below:
const csv = require('csv')

const data = `One,Two,Three
1,2,3
4,5,6`

// Using the csv library:
csv.parse(
  data,
  { columns: true },
  (err, result) => console.log(result)
)

// Doing it manually:
const rowToObject = (headers, cells) =>
  headers.reduce(
    (acc, header, i) => {
      acc[header] = cells[i]
      return acc
    },
    {}
  )

const csvToObjects = file => {
  const [headerRow, ...dataRows] = file.split('\n')
  const headers = headerRow.split(',')
  return dataRows.map(
    row => rowToObject(headers, row.split(','))
  )
}

console.log(csvToObjects(data))
// both options output [{One:1,Two:2,Three:3},{One:4,Two:5,Three:6}]
You can see both of these running in this runkit - apologies, I'm not able to run StackOverflow snippets in my browser at this time.
Here I'll mention a third option: it seems you're using an older version of the csv package. In addition to csv().from.path(path).to.array(options, callback), the module also offers a to.object(options, callback) method, but I'm having trouble finding the documentation for that older version (and I don't even know exactly which version you're currently using, which makes things even more difficult).
There is no way to access the columns by their column names directly; when the csv is read, it is just comma-separated values. Instead, you can change the structure the csv is read into so that it is keyed by column name, like this:
csv_obj.from.path('../data_files/Signals_Info.csv').to.array(function (data) {
  csv_data['signal_name'] = [];
  csv_data['signal_type'] = [];
  csv_data['initial_value'] = [];
  for (var index = 1; index < data.length; index++) {
    csv_data['signal_name'].push(data[index][0]);
    csv_data['signal_type'].push(data[index][1]);
    csv_data['initial_value'].push(data[index][2]);
    // ...
  }
  console.log(csv_data);
});
Try using Papaparse, and make a reader function like this first:
const parser = require('papaparse').parse; // Papaparse's parse function

function reader(csv) {
  return new Promise((resolve, reject) => {
    if (csv) {
      parser(csv, {
        ...config, // your Papaparse options - configure as you want, read the docs
        complete: (response) => resolve(response.data),
        error: (error) => reject(error),
      })
    }
  })
}

// reader() returns a Promise, so await it (from inside an async function)
const contents = await reader(csv)
const csv_obj = contents.map((row) => ({
  signal_name: row[0],
  signal_type: row[1],
  initial_value: row[2],
  minimum: row[3],
  maximum: row[4],
}))
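As a side note, if you pass header: true in the Papaparse config, each parsed row already comes back as an object keyed by the column names from the CSV's header row, which removes the positional row[0], row[1] mapping entirely. A small sketch, assuming the file content is already in a string csvString and that the header names match the property names used above:

const Papa = require('papaparse');

// With header: true, parsing a string is synchronous and returns { data, errors, meta },
// where data is an array of objects keyed by the header row.
const parsed = Papa.parse(csvString, { header: true, skipEmptyLines: true });
// e.g. parsed.data[0].signal_name, parsed.data[0].signal_type, ...
const signalNames = parsed.data.map(row => row.signal_name);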

Table not autoloading when scrolling

I am using w2ui to display a table of a Django model. Instead of loading all the elements at once, I am using autoLoad to load 100 elements at a time. Below is the code for the table:
var config = {
  grid: {
    name: "grid",
    url: "retrieveData/",
    show: {
      footer: true,
      toolbar: true
    },
    header: "List of RTNs",
    columns: [
      { field: "number", caption: "Number", size: "30%" },
      { field: "name", caption: "Name", size: "30%" },
      { field: "release", caption: "Release", size: "30%" }
    ]
  }
}
$(function() {
  $("#grid").w2grid(config.grid);
});
The JSON request is handled by a Django view; below is the code for it:
import json

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

from .models import Data  # adjust the import to wherever the Data model lives

@csrf_exempt
def retrieveData(request):
    cmd = request.POST.get("cmd", False)
    if cmd == "get-records":
        offset = int(request.POST.get("offset", False))
        limit = int(request.POST.get("limit", False))
        entries = Data.objects.all()[offset:limit + offset]
        json_list = {"status": "success"}
        records = []

        def notNone(x):
            if x != None and x != "":
                return x.strftime("%Y-%m-%dT%H:%M:%S")
            else:
                return ""

        for entry in entries:
            records.append({
                "recid": entry.id,
                "number": entry.number,
                "name": entry.name,
                "release": entry.release,
            })
        total = len(records)
        json_list["total"] = total
        json_list["records"] = records
        return HttpResponse(json.dumps(json_list), content_type="application/json")
    else:
        json_list = {"status": "error"}
        json_list["message"] = "CMD: {0} is not recognized".format(cmd)
        json_list["postData"] = request.GET
        return HttpResponse(json.dumps(json_list), content_type="application/json")
The table is able to retrieve the first 100 elements, but when I scroll all the way to the bottom, the table does not load any more elements; it just does nothing. I turned off autoLoad, but that didn't help either (the "Load More" button did not appear). There are a thousand elements in my table.
There are no errors being reported, and everything seems to be working except that it is not loading more elements when I scroll.
I am following the example below from the w2ui site:
http://w2ui.com/web/demos/#!combo/combo-9
The way total is being set at the line
json_list["total"] = total
is wrong, because it says that the total number of elements is 100 even though you have more than 100 elements. "total" is used to indicate the total number of elements you have, not the number of elements you are sending in the JSON response.
Change the code to the following:
@csrf_exempt
def retrieveData(request):
    cmd = request.POST.get("cmd", False)
    if cmd == "get-records":
        offset = int(request.POST.get("offset", False))
        limit = int(request.POST.get("limit", False))
        entries = Data.objects.all()              # <-- changed
        total = len(entries)                      # <-- changed: count of ALL elements
        entries = entries[offset:limit + offset]  # <-- changed
        json_list = {"status": "success"}
        records = []

        def notNone(x):
            if x != None and x != "":
                return x.strftime("%Y-%m-%dT%H:%M:%S")
            else:
                return ""

        for entry in entries:
            records.append({
                "recid": entry.id,
                "number": entry.number,
                "name": entry.name,
                "release": entry.release,
            })
        json_list["total"] = total
        json_list["records"] = records
        return HttpResponse(json.dumps(json_list), content_type="application/json")
    else:
        json_list = {"status": "error"}
        json_list["message"] = "CMD: {0} is not recognized".format(cmd)
        json_list["postData"] = request.GET
        return HttpResponse(json.dumps(json_list), content_type="application/json")
