What is the quickest way to navigate an XML document in JS?

I am working with a semi-large XML document (~4000 elements, each with 30 sub-nodes) and was wondering what the fastest way to pull the data is. Currently my code takes about 4 seconds to run, which isn't terrible, but it could be better.
I know that with SQL databases you can use ordinals (integer column indexes) instead of inefficient string look-ups, and I was wondering if there is any way to do this with XML, or if there is anything else I can look into / try.
My current implementation pulls each value using .getElementsByTagName:
root[i].getElementsByTagName('FirstName')[0].childNodes[0].nodeValue
Edit: show full code
I am pretty much just copy pasting that for all the nodes in each element. Note: there are a lot more elements, but it is the exact same implementation.
const AddUserData_Response = (xml) => {
    let xmlDoc = xml.responseXML;
    let root = xmlDoc.getElementsByTagName('User');
    let table = document.getElementsByClassName(activityTab + "-table-body")[0];
    for (let i = 0; i < root.length; i++) {
        let row = document.createElement("tr");
        CreateTableElement(row, root[i].getElementsByTagName('LastName')[0].childNodes[0].nodeValue);
        CreateTableElement(row, root[i].getElementsByTagName('FirstName')[0].childNodes[0].nodeValue);
        CreateTableElement(row, root[i].getElementsByTagName('MiddleInitial')[0].childNodes[0].nodeValue);
        CreateTableElement(row, root[i].getElementsByTagName('ID')[0].childNodes[0].nodeValue);
        CreateTableElement(row, root[i].getElementsByTagName('Title')[0].childNodes[0].nodeValue);
        table.insertBefore(row, table.children[0]);
    }
}

const CreateTableElement = (row, value) => {
    let cell = document.createElement("td");
    if (value != "None" && value != undefined) {
        cell.innerText = value;
    }
    row.appendChild(cell);
}
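One direction worth testing (a minimal sketch, not from the original post): instead of five getElementsByTagName scans per User element, walk each element's children once and index the values by tag name. The helper name readUserFields is hypothetical.
const readUserFields = (userEl) => {
    // Single pass over the ~30 sub-nodes: build a tagName -> text map
    const fields = {};
    for (const child of userEl.children) {
        fields[child.tagName] = child.textContent;
    }
    return fields;
};

// Hypothetical use inside the existing loop:
// const f = readUserFields(root[i]);
// CreateTableElement(row, f.LastName);
// CreateTableElement(row, f.FirstName);
Appending the rows to a DocumentFragment and inserting it once after the loop may also cut the per-row DOM cost.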

Related

Grab data from website HTML table and transfer to Google Sheets using App-Script

Ok, I know there are similar questions out there to mine, but so far I have yet to find any answers that work for me. What I am trying to do is gather data from an entire HTML table on the web (https://www.sports-reference.com/cbb/schools/indiana/2022-gamelogs.html) and then parse it/transfer it to a range in my Google Sheet. The code below is probably the closest thing I've found so far because at least it doesn't error out, but it will only find one string or value, not the whole table. I've found other answers where they use xmlservice.parse, however that doesn't work for me, I believe because the HTML format has issues that it can't parse. Does anyone have an idea of how to edit what I have below, or a whole new idea that may work for this website?
function SAMPLE() {
    const url = "http://www.sports-reference.com/cbb/schools/indiana/2022-gamelogs.html#sgl-basic?";
    // Get all the static HTML text of the website
    const res = UrlFetchApp.fetch(url, {muteHttpExceptions: true}).getContentText();
    // Find the index of the string of the parameter we are searching for
    const index = res.search("td class");
    // Create a substring to get only the right number values, ignoring all the HTML tags and classes
    const sub = res.substring(index + 92, index + 102);
    Logger.log(sub);
    return sub;
}
I understand that I can use importHTML natively in a Google Sheet, and that's what I'm currently doing. However I am doing this for over 350 webpage tables, and iterating through each one to load it and then copy the value to another sheet. App Script bogs down quite a bit when it is repeatedly waiting on Sheets to load an importHTMl and then grab some data and do it all over again on another url. I apologize for any formatting issues in this post or things I've done wrong, this is my first time posting here.
Edit: ok, I've found a method that works, but it's still much slower than I would like, because it is using Drive API to create a document with the HTML data and then parse and create an array from there. The Drive.Files.Insert line is the most time consuming part. Anyone have an idea of how to make this quicker? It may not seem that slow to you right now, but when I need to do this 350 times, it adds up.
function parseTablesFromHTML() {
    var html = UrlFetchApp.fetch("https://www.sports-reference.com/cbb/schools/indiana/2022-gamelogs.html");
    var docId = Drive.Files.insert(
        { title: "temporalDocument", mimeType: MimeType.GOOGLE_DOCS },
        html.getBlob()
    ).id;
    var tables = DocumentApp.openById(docId)
        .getBody()
        .getTables();
    var res = tables.map(function(table) {
        var values = [];
        for (var row = 0; row < table.getNumRows(); row++) {
            var temp = [];
            var cols = table.getRow(row);
            for (var col = 0; col < cols.getNumCells(); col++) {
                temp.push(cols.getCell(col).getText());
            }
            values.push(temp);
        }
        return values;
    });
    Drive.Files.remove(docId);
    var range = SpreadsheetApp.getActive().getSheetByName("Test").getRange(3, 6, res[0].length, res[0][0].length);
    range.setValues(res[0]);
    SpreadsheetApp.flush();
}
Solution by formula
Try
=importhtml(url,"table",1)
Other solution by script
function importTableHTML() {
    var url = 'https://www.sports-reference.com/cbb/schools/indiana/2022-gamelogs.html';
    var html = '<table' + UrlFetchApp.fetch(url, {muteHttpExceptions: true}).getContentText().replace(/(\r\n|\n|\r|\t| )/gm, "").match(/(?<=\<table).*(?=\<\/table)/g) + '</table>';
    var trs = [...html.matchAll(/<tr[\s\S\w]+?<\/tr>/g)];
    var data = [];
    for (var i = 0; i < trs.length; i++) {
        var tds = [...trs[i][0].matchAll(/<(td|th)[\s\S\w]+?<\/(td|th)>/g)];
        var prov = [];
        for (var j = 0; j < tds.length; j++) {
            var donnee = tds[j][0].match(/(?<=\>).*(?=\<\/)/g)[0];
            prov.push(stripTags(donnee));
        }
        data.push(prov);
    }
    return data;
}

function stripTags(body) {
    var regex = /(<([^>]+)>)/ig;
    return body.replace(regex, "");
}

Generate new table when searching

I am trying to generate a table from an array with a searching feature. With every letter typed in the search bar the items that contain that specific string will be displayed.
I have come to the conclusion that I should regenerate the whole table on every keystroke rather than edit the current one.
This is where I am at:
What's currently typed in the search bar:
let searchBar = document.getElementById('search-input');
let value = "";
searchBar.addEventListener('keyup', function() {
    value = this.value;
});
How I make the table:
let tableUsers = document.getElementById("tabell");

function drawTable() {
    let table = document.createElement("table");
    let tableHead = document.createElement("thead");
    let colHeads = ["Name"];
    for (let header of colHeads) {
        let cell = document.createElement("th");
        cell.innerHTML = header;
        tableHead.appendChild(cell);
    }
    table.appendChild(tableHead);
    for (let x of people) {
        let row = document.createElement("tr");
        let name = document.createElement("td");
        name.innerHTML = x.name.first + "&nbsp;" + x.name.last;
        row.appendChild(name);
        table.appendChild(row);
    }
    tableUsers.appendChild(table);
}
drawTable();
I am trying this:
let str = x.name.first.toLowerCase();
if (str.includes(value)) {
    //code
}
Is it possible to do it this way? Or possible at all using JS and large arrays without using a lot of pc resources?
Any help is greatly appreciated!
Inside the if statement you need to create a new array and push the matching values into it; then pass that filtered array as a parameter to the drawTable function and call it like drawTable(people).
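A minimal sketch of that suggestion, reusing the searchBar, people, tableUsers, and drawTable names from the question (it assumes drawTable is changed to take the array as a parameter and that the old table is cleared first):
searchBar.addEventListener('keyup', function() {
    let value = this.value.toLowerCase();
    // Keep only the people whose first name contains the typed text
    let filtered = people.filter(x => x.name.first.toLowerCase().includes(value));
    tableUsers.innerHTML = "";  // clear the previous table before redrawing
    drawTable(filtered);        // drawTable loops over its parameter instead of the global array
});
Filtering an in-memory array this way is cheap even for large lists; the expensive part is the DOM rebuild, so redrawing once per keystroke is usually fine.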

Optimization of code while working with range object in excel

I have recently moved our Office add-in from VB.NET to JavaScript and Office.js. In general, I have observed that the JavaScript add-in is far faster than VB.NET.
In one of the operations, though, I am not getting that speed benefit in the JavaScript add-in. Maybe it is due to inadequate code on my part!
I have one master sheet which contains a large table: the first column of the table holds the name of every other sheet in the workbook, and the first row holds cell addresses. The rest of the data in the table are values to be transferred to the sheet named in the first column, at the cell address given in the first row.
In code, I create an array of range objects, one for each value indicated in the table, and then run the context.sync() function to restore the values on every sheet.
In my real-life application the table can hold 10K to 50K values, and this operation takes approximately one minute (for 10K). In contrast, I can create the table (master sheet) itself within a few seconds (5-6 sec).
Is there any workaround or suggestion to reduce this time?
/* global Excel, console */
export default async function restoreData() {
    var allRollback = false;
    await Excel.run(async (context) => {
        var sheets = context.workbook.worksheets.load("items/name");
        var wsdb = context.workbook.worksheets.getItem("db");
        const arryRange = wsdb.getUsedRange();
        var addRow = 0;
        var sheetName = [];
        var rangeObj = [];
        // Get last row/column from used range
        arryRange.load(["rowCount", "columnCount", "values", "address"]);
        await context.sync();
        sheets.items.forEach((sheet) => sheetName.push(sheet.name));
        for (let iRow = 0; iRow < arryRange.rowCount; iRow++) {
            if (arryRange.values[iRow][0] == "SheetName/CellAddress") {
                addRow = iRow;
            }
            if (sheetName.indexOf(arryRange.values[iRow][0]) != -1) {
                for (let iCol = 1; iCol < arryRange.columnCount; iCol++) {
                    if (arryRange.values[addRow][iCol]) {
                        const sheet = context.workbook.worksheets.getItem(arryRange.values[iRow][0]);
                        const range = sheet.getRange(arryRange.values[addRow][iCol]);
                        range.values = arryRange.values[iRow][iCol];
                        rangeObj.push(range);
                    }
                }
            } else {
                // code for highlighting the row in db
                console.log("Y");
            }
        }
        console.log("Range object created");
        await context.sync();
        // console.log(arryRange.rowCount);
        // console.log(arryRange.columnCount);
        console.log("done");
        // Copy a range starting at a single cell destination.
    });
    allRollback = true;
    return allRollback;
}
First, based on your statement, I assume you have a table in the master sheet. This table's heading looks like ["sheetName","A13","D23",...] ("A13" and "D23" are examples of cell addresses). Each row of this table contains a sheet name and some values. The sheet name may not correspond to a real (existing) sheet, and the values may contain blanks. You want to set values on other sheets based on the information given by the master sheet's table.
I have some suggestions based on my assumptions and your code.
Move unchanged values out of loops.
For example, you call const sheet = context.workbook.worksheets.getItem(arryRange.values[iRow][0]);. We can move context.workbook.worksheets out of the loops by defining var sheets = context.workbook.worksheets once and then using const sheet = sheets.getItem(arryRange.values[iRow][0]), which should improve performance.
Some reused values like arryRange.values[iRow][0] and arryRange.values[0][iCol] can also be moved out of the loops.
It seems you use arryRange.values[addRow][iCol] only to get the address from the table's heading, so you can replace it with arryRange.values[0][iCol].
Below is the code I rewrote, just for reference; it may not fully satisfy what you need.
export default async function restoreData() {
    var allRollback = false;
    await Excel.run(async (context) => {
        var sheets = context.workbook.worksheets.load("items/name");
        var wsdb = context.workbook.worksheets.getItem("db");
        const arryRange = wsdb.getUsedRange();
        //var addRow = 0;
        var sheetName = [];
        var rangeObj = [];
        // Get last row/column from used range
        arryRange.load(["rowCount", "columnCount", "values", "address"]);
        await context.sync();
        sheets.items.forEach((sheet) => sheetName.push(sheet.name));
        var cellAddress, curSheetName;
        const mySheets = context.workbook.worksheets;
        for (let iRow = 0; iRow < arryRange.rowCount; iRow++) {
            curSheetName = arryRange.values[iRow][0];
            if (sheetName.indexOf(curSheetName) != -1) {
                for (let iCol = 1; iCol < arryRange.columnCount; iCol++) {
                    cellAddress = arryRange.values[0][iCol];
                    if (cellAddress) {
                        const sheet = mySheets.getItem(curSheetName);
                        const range = sheet.getRange(cellAddress);
                        range.values = arryRange.values[iRow][iCol];
                        rangeObj.push(range);
                    }
                }
            } else {
                // code for highlighting the row in db
                console.log("Y");
            }
        }
        console.log("Range object created");
        await context.sync();
        // console.log(arryRange.rowCount);
        // console.log(arryRange.columnCount);
        console.log("done");
        // Copy a range starting at a single cell destination.
    });
    allRollback = true;
    return allRollback;
}
More references:
https://learn.microsoft.com/en-us/office/dev/add-ins/excel/performance?view=excel-js-preview
https://learn.microsoft.com/en-us/office/dev/add-ins/concepts/correlated-objects-pattern
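One further, hedged idea that goes beyond the rewrite above: if a single context.sync() behind tens of thousands of queued writes is still slow, flushing the queue in chunks sometimes helps. A minimal sketch of the pattern; the writes array shape, the function name, and the chunk size are all assumptions, not part of the original code:
/* global Excel */
// Hypothetical chunked-flush pattern: queue a bounded number of writes, sync, repeat.
async function restoreDataChunked(writes) {
    // writes: assumed array of { sheetName, address, value } built from the master table
    const CHUNK = 5000; // a guess; tune against real workbooks
    await Excel.run(async (context) => {
        const mySheets = context.workbook.worksheets;
        let queued = 0;
        for (const w of writes) {
            mySheets.getItem(w.sheetName).getRange(w.address).values = [[w.value]];
            if (++queued % CHUNK === 0) {
                await context.sync(); // flush this batch of queued operations
            }
        }
        await context.sync(); // flush the remainder
    });
}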
With reference to your assumption, please note that the master sheet is created from the actual workbook using a user-selected area, plus deletion of the empty-value columns in the master sheet (i.e. cells that are empty at that address on every sheet). The sheet names will be the same as the actual sheet names unless the user changes a value in the master sheet by accident.
With reference to your suggestions:
Suggestion 1) I believe that moving unchanged values out of the loops will be the key to my problem. I will rebuild one range and compare the data for changes. I believe speed will improve drastically in the best-case scenario. (I will also have some worst-case scenarios (less than 5% of cases) where I will be required to write every value of the master sheet.)
Suggestion 2) I am planning a new feature which may have more rows as addresses; that is the reason I keep looking for address rows.
Thanks for your reply.

Ridiculously slow Apps-Script Loop

I've just switched from Excel to Google Sheets and have had to go through a bit of a learning curve, moving on with "macros", or scripts as they're now called.
Anyway, a short while later I've written a loop that goes through everything in column B and, if it's less than 50, deletes the row.
It works and I'm happy, but it's so slow. I have about 16,000 rows and I'll probably end up with more. I let it run for about 4 minutes and it didn't even get rid of 1,000 rows. I refuse to believe that a popular programming language is so slow that I can still read the rows as they're being deleted 20 rows up.
function grabData() {
    let sheet = SpreadsheetApp.getActive().getSheetByName("Keywords");
    var rangeData = sheet.getDataRange();
    var lastColumn = rangeData.getLastColumn();
    var lastRow = rangeData.getLastRow();
    let range = sheet.getRange("B2:B16000");
    let values = range.getValues();
    for (var i = 0, len = values.length; i < len; i++) {
        if (values[i] <= 50) {
            sheet.deleteRow(i);
            i--;
            len--;
        }
    }
}
I keep seeing somewhere that something's not being reset, but I have no idea what that means.
Is it because the array length starts off at 16,000 and when I delete a row I'm not accounting for it properly?
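For the index question itself: yes, deleting row i shifts every row below it up by one, and sheet rows are 1-based with a header row, so the array index and the sheet row drift apart. A hedged sketch of the index fix alone, reusing sheet and values from the question (still slow, because it deletes row by row; the batch rewrites below are the real cure):
for (var i = values.length - 1; i >= 0; i--) {
    // Iterate bottom-up so deletions don't shift the rows not yet visited
    if (values[i][0] <= 50) {
        sheet.deleteRow(i + 2); // +2: rows are 1-based and the data starts at row 2
    }
}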
Since I never use formulas I would do it this way:
function grabData() {
    let ss = SpreadsheetApp.getActive();
    let sh = ss.getSheetByName("Keywords");
    let rg = sh.getRange(2, 2, sh.getLastRow() - 1, sh.getLastColumn() - 1);
    let values = rg.getValues();
    let oA = [];
    values.forEach((r) => {
        if (r[0] > 50) {
            oA.push(r);
        }
    });
    rg.clearContent();
    sh.getRange(2, 2, oA.length, oA[0].length).setValues(oA);
}
It's much faster, but it will probably mess up your formulas, which is one of the reasons I never use them. Deleting rows is quite slow; pretty much anything you do with the UI is slow.
Welcome to Apps Script and the community! Apps Script is actually very fast if you follow its best practices.
Here is an example that will complete what you need in about one second (modify the variable values in the config section to fit your own application):
function myFunction() {
    // config
    const filterValue = 50
    const targetSheetName = "Sheet1"
    const targetColumn = "A"
    const startRowNum = "2"
    // get data from target sheet
    const ss = SpreadsheetApp.getActiveSpreadsheet()
    const sheet = ss.getSheetByName(targetSheetName)
    const endRowNum = sheet.getLastRow()
    const targetRange = `${targetColumn + startRowNum}:${targetColumn + endRowNum}`
    const data = sheet.getRange(targetRange).getValues()
    // keep only the rows that pass the filterValue test
    const ary = data.filter(row => row[0] >= filterValue)
    // get max row number in the sheet
    const maxRowNum = sheet.getMaxRows()
    // if no row survived the filter, clear everything and stop
    if (ary.length === 0) {
        // remove all data rows and break
        let deleteStartFromRowNum = parseInt(startRowNum, 10) - 1
        let deleteRowsCount = maxRowNum - deleteStartFromRowNum
        sheet.deleteRows(deleteStartFromRowNum, deleteRowsCount)
        return
    }
    // if every row survived the filter, nothing needs rewriting
    if (ary.length === data.length) {
        // remove all trailing empty rows
        if (endRowNum < maxRowNum) {
            let deleteStartFromRowNum = endRowNum + 1
            let deleteRowsCount = maxRowNum - endRowNum
            sheet.deleteRows(deleteStartFromRowNum, deleteRowsCount)
        }
        return
    }
    // get lower bound (the last row of filtered data in ary)
    const lowerBound = parseInt(startRowNum, 10) + ary.length - 1
    // set ary into the sheet range according to the lowerBound value
    sheet.getRange(`${targetColumn + startRowNum}:${targetColumn + lowerBound.toString()}`).setValues(ary)
    // delete the rest of the rows below the lower bound
    let deleteStartFromRowNum = lowerBound + 1
    let deleteRowsCount = maxRowNum - lowerBound
    sheet.deleteRows(deleteStartFromRowNum, deleteRowsCount)
    return
}
Issue:
In Apps Script, you want to minimize calls to other services, including requests to Spreadsheets (see Minimize calls to other services). Calling other services in a loop will slow down your script considerably.
Because of this, it's much preferable to filter the undesired rows out of values, remove all existing data in the range via Range.clearContent(), and then use setValues(values) to write the filtered values back to the spreadsheet (see Use batch operations).
Code snippet:
function grabData() {
    let sheet = SpreadsheetApp.getActive().getSheetByName("Keywords");
    const range = sheet.getRange("B2:B16000");
    const values = range.getValues().filter(val => val[0] > 50);
    range.clearContent();
    sheet.getRange(2, 2, values.length).setValues(values);
}
Reference:
Best Practices

Javascript performance optimization

I created the following js function
function csvDecode(csvRecordsList)
{
    var cel;
    var chk;
    var chkACB;
    var chkAF;
    var chkAMR;
    var chkAN;
    var csvField;
    var csvFieldLen;
    var csvFieldsList;
    var csvRow;
    var csvRowLen = csvRecordsList.length;
    var frag = document.createDocumentFragment();
    var injectFragInTbody = function () {tblbody.replaceChild(frag, tblbody.firstElementChild);};
    var isFirstRec;
    var len;
    var newEmbtyRow;
    var objCells;
    var parReEx = new RegExp(myCsvParag, 'ig');
    var tblbody;
    var tblCount = 0;
    var tgtTblBodyID;
    for (csvRow = 0; csvRow < csvRowLen; csvRow++)
    {
        if (csvRecordsList[csvRow].startsWith(myTBodySep))
        {
            if (frag.childElementCount > 0)
            {
                injectFragInTbody();
            }
            tgtTblBodyID = csvRecordsList[csvRow].split(myTBodySep)[1];
            newEmbtyRow = getNewEmptyRow(tgtTblBodyID);
            objCells = newEmbtyRow.cells;
            len = newEmbtyRow.querySelectorAll('input')[0].parentNode.cellIndex; // Finds the cell index of the first input (checkbox or button)
            tblbody = getElById(tgtTblBodyID);
            chkAF = toBool(tblbody.dataset.acceptfiles);
            chkACB = toBool(tblbody.dataset.acceptcheckboxes);
            chkAN = toBool(tblbody.dataset.acceptmultiplerows);
            tblCount++;
            continue;
        }
        csvRecordsList[csvRow] = csvRecordsList[csvRow].replace(parReEx, myInnerHTMLParag); // Replaces every paragraph symbol ¶ used in the db.csv file with the <br> tag needed in the HTML content of table cells, so line breaks can be used inside cells
        csvFieldsList = csvRecordsList[csvRow].split(myEndOfFld);
        csvFieldLen = csvFieldsList.length;
        for (csvField = 0; csvField < csvFieldLen; csvField++)
        {
            cel = chkAN ? csvField + 1 : csvField;
            if (chkAF && cel === 1) {objCells[cel].innerHTML = makeFileLink(csvFieldsList[csvField]);}
            else if (chkACB && cel === len) {objCells[cel].firstChild.checked = toBool(csvFieldsList[csvField]);}
            else {objCells[cel].innerHTML = csvFieldsList[csvField];}
        }
        frag.appendChild(newEmbtyRow.cloneNode(true));
    }
    injectFragInTbody();
    var recNum = getElById(tgtTblBodyID).childElementCount;
    customizeHtmlTitle();
    return csvRow - tblCount + ' (di cui ' + recNum + ' record di documenti)';
}
More than 90% of records could contain file names that have to be processed by the following makeFileLink function:
function makeFileLink(fname)
{
    return ['<a href="', dirDocSan, fname, '" target="', previewWinName, '" title="Apri il file allegato: ', fname, '" >', fname, '</a>'].join('');
}
It aims to decode a record list from a special type of *.db.csv file (a comma-separated-values file where the commas are replaced by another symbol I hard-coded into the variable myEndOfFld). (This special *.db.csv is created by another function I wrote, and it is just a "text" file.)
The record list to decode and append to the HTML tables is passed to the function through its lone parameter: csvRecordsList.
The CSV file hosts data coming from several HTML tables.
The tables differ in number of rows and columns and in the data types they contain (filenames, numbers, strings, dates, checkbox values).
Some tables may be just one row; others accept more rows.
A row of data has the following basic structure:
data field content 1|data field content 2|data field content 3|etc...
Once decoded by my algorithm, it is rendered correctly in the HTML td element even when a field contains multiple paragraphs. The <br> tag is added where needed by the code:
csvRecordsList[csvRow].replace(parReEx, myInnerHTMLParag)
which replaces every occurrence of the character I chose to represent the paragraph symbol, hard-coded into the variable myCsvParag.
It isn't possible to know at programming time the number of records to load into each table, the number of records loaded from the CSV file, the number of fields in each record, or which table fields will contain data and which will be empty: in the same record some fields may contain data while others are empty. Everything has to be discovered at runtime.
In the special CSV file, each table is separated from the next by a row which contains just a string with the following pattern: myTBodySep + tablebodyid, where myTBodySep = "targettbodydatatable" is just a hard-coded string of my choice.
tablebodyid is a placeholder for the id of the target table tbody element to insert new records into, for example tBodyDataCars, tBodyDataAnimals, etc.
So when the first for loop finds in csvRecordsList a string starting with the value of myTBodySep, it gets the tablebodyid from the same row: this becomes the new tbody id to target when injecting the records that follow.
Each table is archived in the CSV file this way.
The first for loop scans the CSV record list from the file, and the second for loop prepares what is needed to fill the targeted table with data.
The above code works well but is a little slow: loading about 300 records from the CSV file into the HTML tables takes a bit more than 2.5 seconds on a computer with 2 GB of RAM and a Pentium Core 2 4300 dual-core at 1800 MHz, but if I comment out the row that updates the DOM, the function needs less than 0.1 s. So IMHO the bottleneck is the fragment- and DOM-manipulating part of the code.
My aim and hope is to optimize the speed of the above code without losing functionality.
Note that I'm targeting just modern browsers; I don't care about older, non-standards-compliant browsers... I feel sorry for them...
Any suggestions?
Thanks in advance.
Edit 16.02.2018
I don't know if it is useful, but lately I've noticed that if the data is loaded from the browser's sessionStorage, the load and rendering time is more or less halved. But strangely it is the exact same function that loads the data from both the file and sessionStorage.
I don't understand this different behavior, considering that the data is exactly the same and in both cases it is passed to a variable handled by the function itself before the performance timing starts.
Edit 18.02.2018
Number of rows is variable depending on the target table: from 1 to 1000 (could be even more in particular cases)
Number of columns depending on the target table: from 10 to 18-20
Indeed, building the table with DOM manipulation is far slower than a simple innerHTML update of the table element.
If you rewrite your code to prepare an HTML string and put it into the table's innerHTML, you will see a significant performance boost.
Browsers are optimized to parse the text/html they receive from the server, as that is their main purpose. DOM manipulation via JS is secondary, so it is not as heavily optimized.
I've made a simple benchmark for you.
Let's make a 300x300 table and fill its 90,000 cells with 'A'.
There are two functions.
The first one is a simplified variant of your code which uses DOM methods:
var table = document.querySelector('table tbody');
var cells_in_row = 300, rows_total = 300;

var start = performance.now();
fill_table_1();
console.log('using DOM methods: ' + (performance.now() - start).toFixed(2) + 'ms');
table.innerHTML = '<tbody></tbody>';

function fill_table_1() {
    var frag = document.createDocumentFragment();
    var injectFragInTbody = function() {
        table.replaceChild(frag, table.firstElementChild);
    }
    var getNewEmptyRow = function() {
        var row = table.firstElementChild;
        if (!row) {
            row = table.insertRow(0);
            for (var c = 0; c < cells_in_row; c++) row.insertCell(c);
        }
        return row.cloneNode(true);
    }
    for (var r = 0; r < rows_total; r++) {
        var new_row = getNewEmptyRow();
        var cells = new_row.cells;
        for (var c = 0; c < cells_in_row; c++) cells[c].innerHTML = 'A';
        frag.appendChild(new_row.cloneNode(true));
    }
    injectFragInTbody();
    return false;
}
<table><tbody></tbody></table>
The second one prepares an HTML string and puts it into the table's innerHTML:
var table = document.querySelector('table tbody');
var cells_in_row = 300, rows_total = 300;

var start = performance.now();
fill_table_2();
console.log('setting innerHTML: ' + (performance.now() - start).toFixed(2) + 'ms');
table.innerHTML = '<tbody></tbody>';

function fill_table_2() { // setting innerHTML
    var html = '';
    for (var r = 0; r < rows_total; r++) {
        html += '<tr>';
        for (var c = 0; c < cells_in_row; c++) html += '<td>A</td>';
        html += '</tr>';
    }
    table.innerHTML = html;
    return false;
}
<table><tbody></tbody></table>
I believe you'll come to some conclusions.
I've got two thoughts for you.
1: If you want to know which parts of your code are (relatively) slow you can do very simple performance testing using the technique described here. I didn't read all of the code sample you gave but you can add those performance tests yourself and check out which operations take more time.
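A generic timing sketch of that technique (nothing project-specific assumed); the commented phases are placeholders for your own code:
var t0 = performance.now();
// ... CSV/string processing phase goes here ...
var t1 = performance.now();
// ... DOM update phase goes here ...
var t2 = performance.now();
console.log('processing: ' + (t1 - t0).toFixed(2) + 'ms, DOM: ' + (t2 - t1).toFixed(2) + 'ms');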
2: What I know of JavaScript and the browser is that changing the DOM is an expensive operation; you don't want to change the DOM too many times. What you can do instead is build up a set of changes and then apply all of them with one DOM change. This may make your code less nice, but that's often the tradeoff when you want high performance.
Let me know how this works out for you.
You should start by refactoring your code into multiple functions to make it more readable. Make sure you separate DOM-manipulation functions from data-processing functions. Ideally, create a class and move those variables out of your function; that way you can access them with this.
Then execute each data-processing function in a web worker, so you can be sure your UI won't be blocked by the processing. You won't be able to access this in a web worker, so you will have to limit it to pure "input/output" operations.
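A minimal sketch of that hand-off (the file name csv-worker.js and the renderRows function are hypothetical; the worker exchanges only plain data, since it cannot touch the DOM):
// Main thread: ship the raw records out, touch the DOM only on reply.
var worker = new Worker('csv-worker.js'); // hypothetical file name
worker.onmessage = function (e) {
    renderRows(e.data); // hypothetical function containing the DOM updates
};
worker.postMessage(csvRecordsList);

// csv-worker.js: pure input/output, no DOM access.
self.onmessage = function (e) {
    // '|' stands in for the field separator (myEndOfFld in the question)
    var rows = e.data.map(function (rec) { return rec.split('|'); });
    self.postMessage(rows);
};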
You can also use promises instead of home-made callbacks. It makes the code a bit more readable and, honestly, easier to debug. You can do some cool stuff like:
this.processThis('hello').then((resultThis) => {
    this.processThat(resultThis).then((resultThat) => {
        this.displayUI(resultThat);
    }, (error) => {
        this.errorController.show(error); // processThat error
    });
}, (error) => {
    this.errorController.show(error); // processThis error
});
Good luck!
