Efficient way to generate report data - javascript

I have some report data that I am fetching from, say, a web service.
The data is in JSON format of the form:
Year -> Region -> Items == data for 12 months (a simple array)
(3) -> (3) -> (10)
i.e. 3 years, each containing 5 regions, and each region containing say 10 products.
What is an efficient way to generate various representations of this data in JavaScript?
Or, what methodologies should be adopted for this kind of report generation?
I am building a BI solution using JavaScript.
My application uses the data in different formats for its respective components.
Switching between these components constantly involves regenerating the data.
Hence, I either need the datasets for these components to be generated in advance, or an efficient way to generate them on the fly.
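For concreteness, a minimal sketch of how such a payload might look; the year, region, and product names are purely hypothetical:

// Hypothetical shape of the payload described above:
// year -> region -> product -> array of 12 monthly values.
const reportData = {
  "2021": {
    "North": {
      "Product A": [10, 12, 9, 14, 11, 13, 15, 12, 10, 9, 14, 16],
      "Product B": [5, 7, 6, 8, 9, 7, 6, 5, 8, 9, 7, 6]
    },
    "South": {
      "Product A": [4, 6, 5, 7, 8, 6, 5, 4, 7, 8, 6, 5]
    }
  }
  // ...more years, regions, and products
};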

"I need the datasets for these components to be generated in advance"
I believe this is the simpler, more straightforward approach: aggregate and store your data in separate tables beforehand.
This would require a cron job or some other middle process. Instead of processing and generating reports in JavaScript on each page load, the data would be retrieved, aggregated, and stored in a separate table.
JavaScript would then only need to retrieve and present the data from the aggregated table.
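As an illustration, a minimal sketch of what the JavaScript side might look like under this approach; the endpoint and component names are hypothetical, and the aggregation itself is assumed to have already been done server-side by the scheduled job:

// Cache each pre-aggregated dataset after the first fetch so that switching
// between components does not trigger another request or any recomputation.
const datasetCache = new Map();

async function getDataset(component) {
  if (!datasetCache.has(component)) {
    // Hypothetical endpoint serving the table populated by the cron job.
    const res = await fetch('/api/aggregated?component=' + encodeURIComponent(component));
    datasetCache.set(component, await res.json());
  }
  return datasetCache.get(component);
}

// Usage: each component asks for its own pre-built dataset.
getDataset('regional-trend').then(data => console.log(data));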

Related

Most memory efficient way to store very large array of objects in javascript

I am trying to write embedded JavaScript code in an OBIEE report. The basic idea of the report is to take tabular data (rows and columns), provide a way to extract user-specified columns from it, and download the resulting data as a CSV or Excel file. I'm trying to do this by storing the data as an array of objects, something like this:
[
{'column1':'Entry1','column2':'Entry2',...}
,{'column1':'Entry1','column2':'Entry2',...}
,....
]
The problem is that I'm getting a C-runtime error (std::bad_alloc), which I assume means it is running out of memory, because it works when I take in fewer rows. The data is expected to have a maximum of about 200 columns (which may be empty or non-empty) and 1-2 million rows. What is the most memory-efficient way to store such data: one copy of the full data, and then one copy of the data with only the required columns? I can't post the exact code here for security reasons, as it's on a work laptop on a secure server.
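As a minimal sketch of the extraction step described (not of the memory question itself), selecting the user-specified columns from an array of objects and building a CSV string might look like this; the column names are hypothetical:

// Keep only the user-selected columns and build a CSV string from the rows.
function extractColumnsToCsv(rows, selectedColumns) {
  const header = selectedColumns.join(',');
  const lines = rows.map(row =>
    selectedColumns
      .map(col => {
        const value = row[col] == null ? '' : String(row[col]);
        // Quote values containing commas, quotes, or newlines.
        return /[",\n]/.test(value) ? '"' + value.replace(/"/g, '""') + '"' : value;
      })
      .join(',')
  );
  return [header, ...lines].join('\n');
}

// Usage with a tiny hypothetical payload:
const rows = [
  { column1: 'Entry1', column2: 'Entry2', column3: 'Entry3' },
  { column1: 'Entry4', column2: 'Entry5', column3: 'Entry6' }
];
console.log(extractColumnsToCsv(rows, ['column1', 'column3']));

For data of this size, column-oriented layouts (one flat array per column, ideally typed arrays for numeric columns) are often noticeably more compact than arrays of objects.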

Storing and retrieving data using MySQL for an NxN structure in the most optimized and least time-consuming way

I have a form containing dynamic columns and rows which can be added and deleted by the user.
With around 50 to 60 rows and up to 10 columns, there are a lot of calculations taking place in the JavaScript.
I am using a MySQL database and PHP (Laravel) as the backend.
Currently, my table structure for storing this is as given below:
Table 1 - row_headers (row_id,row_name)
Table 2 - column_headers (column_id, column_name)
Table 3 - book_data ( book_id, row_id, column_id, data_value)
The above set of tables suffices for storing the data, but it is extremely slow for both store and get calls. Getting the complete data back to the UI puts a heavy load on the database, and rendering it properly in HTML is also tedious (the nested for loops kill all the time).
I want to understand how to optimize this. What table structure should be used instead, and what is the best way to reduce the load on both the backend and the frontend?
Help appreciated.
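As one illustration of cutting down the nested for loops on the frontend, the book_data rows can be indexed once by row and column id and then looked up per cell; a minimal sketch, with hypothetical sample values:

// Index book_data by "row:column" once, then fill the grid with O(1) lookups
// instead of scanning book_data for every cell.
function buildGrid(rowHeaders, columnHeaders, bookData) {
  const byCell = new Map();
  for (const d of bookData) {
    byCell.set(d.row_id + ':' + d.column_id, d.data_value);
  }
  return rowHeaders.map(r => ({
    row_name: r.row_name,
    cells: columnHeaders.map(c => byCell.get(r.row_id + ':' + c.column_id) ?? null)
  }));
}

// Usage with a tiny hypothetical payload:
const grid = buildGrid(
  [{ row_id: 1, row_name: 'Row A' }],
  [{ column_id: 1, column_name: 'Col 1' }, { column_id: 2, column_name: 'Col 2' }],
  [{ book_id: 9, row_id: 1, column_id: 2, data_value: 42 }]
);
console.log(grid); // [{ row_name: 'Row A', cells: [null, 42] }]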

How do I parse large amounts of data in XLSX with Javascript?

I want to process some 48,000 rows to build a dashboard and show some stats based on the data in those rows. One particular field, which has a length of 30 characters, also contains data in the form of substrings. How do I parse all of this data, row by row, to arrive at the end result? There are plenty of examples out there, but I couldn't relate them to my case.
I'm using the "js-xlsx" library in one of my applications, and the performance seems to be quite good.
Here is the GitHub URL:
https://github.com/SheetJS/js-xlsx
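A minimal sketch of reading the rows with js-xlsx (SheetJS) in Node; the file name and field name are hypothetical:

// Load the first worksheet and convert it into an array of row objects
// keyed by the header row, which can then be scanned to build dashboard stats.
const XLSX = require('xlsx');

const workbook = XLSX.readFile('report.xlsx');
const sheet = workbook.Sheets[workbook.SheetNames[0]];
const rows = XLSX.utils.sheet_to_json(sheet); // ~48,000 row objects

// Example: take a substring of a 30-character field and tally the counts.
const counts = {};
for (const row of rows) {
  const key = String(row['SomeField'] || '').substring(0, 5); // hypothetical field
  counts[key] = (counts[key] || 0) + 1;
}
console.log(counts);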

HTML/Javascript + CSVs - Using filters vs. Loading many CSVs

I'm thinking about the best way to feed a chart with filtered data. I have year and month filters, among others, and I want the chart to show up quickly and switch data quickly on filter changes.
Is it better to prepare already-filtered data as individual CSVs and load them from the server as needed, or should I use JavaScript to filter the data on the client side from one big CSV?
Individual files:
- Load quickly
- No client-side computations
Big CSV:
- Avoids repeated network requests (loaded once)
If I choose individual files I would have A LOT of them, since the filters create many combinations. I don't know if there is any drawback in that case. I think the individual files are the option with the highest performance.
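A minimal sketch of the single-big-CSV option, assuming hypothetical year and month columns and a simple comma-only CSV (no quoted fields):

// Parse the CSV once, keep the rows in memory, and re-filter on each filter change.
function parseCsv(text) {
  const [headerLine, ...lines] = text.trim().split('\n');
  const headers = headerLine.split(',');
  return lines.map(line => {
    const cells = line.split(',');
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]]));
  });
}

let allRows = [];

fetch('data.csv')                       // hypothetical file, loaded once
  .then(res => res.text())
  .then(text => { allRows = parseCsv(text); });

// Called whenever a filter changes; the chart is fed the filtered subset.
function rowsFor(year, month) {
  return allRows.filter(r => r.year === year && r.month === month);
}

Once the file is parsed, filtering a few tens of thousands of rows on the client is usually fast enough for interactive filter changes.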

What is the best method for mapping multiple point datasets with filters?

I have 5 point datasets and 1 polygon dataset. I want to map them and have checkboxes to filter each dataset by 2 fields each.
What is the best method, or is there an available template? I would like to use GeoJSON or CSV for the data.
The best method would be not to strain your client with that task, but to have some sort of API which returns the filtered results. Generate requests for your datasets based on your inputs, for example: /api/dataset/1?foo=true&bar=false or /api/dataset/3?foo=false&bar=true, and then let your server return the appropriate results. That way your client doesn't have to download entire result sets when they might not be needed (which is faster) and doesn't have to do the filtering (which is slower). Your application will feel much quicker and more responsive that way.
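A minimal sketch of that approach on the client, following the hypothetical endpoint and parameter names above:

// Build the query string from the checkbox state and let the server filter.
async function loadDataset(datasetId, filters) {
  const params = new URLSearchParams(
    Object.entries(filters).map(([key, value]) => [key, String(value)])
  );
  const res = await fetch('/api/dataset/' + datasetId + '?' + params.toString());
  return res.json(); // e.g. GeoJSON to add as a map layer
}

// Usage: requests /api/dataset/1?foo=true&bar=false
loadDataset(1, { foo: true, bar: false }).then(geojson => console.log(geojson));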
