I fetch a fat JSON array from the server via an AJAX call, then process it and render HTML with JavaScript. What I want is to make this as fast as humanly possible.
Chrome leads over FF in my tests but it can still take 5-8 seconds for the browser to render ~300 records.
I considered lazy-loading such as that implemented in Google Reader but that goes against my other use cases, such as being able to get instantaneous search results (simple search being done on the client side over all the records we got in the JSON array) and multiple filters.
One thing I have noticed is that both FF and Chrome do not render anything until they loop over all items in the JSON array, even though I explicitly insert the newly created elements into DOM after every loop (as soon as I have the HTML). What I'd like to achieve would be just that: force the browser to render as soon as it can.
I tried deferring the calls (every item from the array would be processed by a deferred function) but ran into additional issues there, as the order of execution no longer seems to be guaranteed (some items further down the array would be processed before items that precede them).
I'm looking for any hints and tips here.
Try:
Push rows of HTML into an array, then simply:
el.innerHTML = array.join("");
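For instance, a minimal sketch of the string-building approach (the container id and the name/value fields are assumptions; adapt to your records):

var rows = [];
for (var i = 0; i < records.length; i++) {
    rows.push("<tr><td>" + records[i].name + "</td><td>" + records[i].value + "</td></tr>");
}
// one innerHTML assignment instead of one DOM insertion per row
document.getElementById("results").innerHTML = "<table><tbody>" + rows.join("") + "</tbody></table>";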
Or use document fragments:

var frag = document.createDocumentFragment();
for (var i = 0; i < items.length; i++) {
    var el = buildRow(items[i]); // however you build each row's element
    frag.appendChild(el);
}
parent.appendChild(frag); // one reflow instead of one per row
If you don't need to display all 300 records at once, you could paginate them 30 or 50 records at a time and only unroll the JSON array as those sub-sets need to be displayed through a pager or a local search box. Once converted, you could cache the content for subsequent display as users navigate up and down the pages.
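A rough sketch of that idea, assuming the full records array is already on the client (the page size, container id, and name field are all assumptions):

var PAGE_SIZE = 50;
var pageCache = {};

function showPage(n) {
    if (!pageCache[n]) {
        // convert only this page's slice of records, then memoize the HTML
        var slice = records.slice(n * PAGE_SIZE, (n + 1) * PAGE_SIZE);
        var rows = [];
        for (var i = 0; i < slice.length; i++) {
            rows.push("<tr><td>" + slice[i].name + "</td></tr>");
        }
        pageCache[n] = "<table><tbody>" + rows.join("") + "</tbody></table>";
    }
    document.getElementById("results").innerHTML = pageCache[n];
}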
Try creating the elements in a detached DOM node or a document fragment, then attaching the whole thing in one go.
300 isn't a lot.
I managed to create a tree of over 500 elements with data from JSON using jQuery, in a fraction of a second on Chrome.
300 isn't a big number.
If they render that slowly, it might be because of how you're doing it. Can you specify how you do it?
The slowest way would be to build the HTML as a string in JavaScript and then assign it via the innerHTML property. But even that would still be fast as hell for 300 rows.
Google Web Toolkit has BulkTableRenderers that are designed to render large tables quickly. Even if you choose not to use GWT, you might be able to pick up some techniques by looking through the source code, which is available under the Apache License, Version 2.0.
Related
My problem is that I have a large amount of data (over 50k records) and I have to map it to the DOM with vanilla JavaScript (ES). Sometimes the page crashes while the data is loading. What should I choose, async/await or promises? Also, which method would be better, XHR or fetch? Or should I use some third-party library? This is a big problem for me, because sometimes it shows the data after an interval but sometimes the page crashes. Can anyone explain?
Firstly, I would look at the value gained from really mapping 50k items to a single web page. Look at loading data on demand in sets as users scroll through the items, or at some filtering mechanism to apply to the data before loading it.
If you really have to load that much data into a page, then look at loading it in chunks, or optimize your code so that less strain is put on the browser.
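To illustrate the chunking idea, here is a minimal sketch (the chunk size, the label field, and the markup are all assumptions). It renders one batch, yields to the browser, then continues, so the main thread gets to breathe between batches:

var CHUNK = 500;

function renderChunked(items, container, start) {
    start = start || 0;
    var frag = document.createDocumentFragment();
    var end = Math.min(start + CHUNK, items.length);
    for (var i = start; i < end; i++) {
        var div = document.createElement("div");
        div.textContent = items[i].label; // assumed field name
        frag.appendChild(div);
    }
    container.appendChild(frag);
    if (end < items.length) {
        // yield back to the browser before the next batch, keeping the page responsive
        requestAnimationFrame(function () { renderChunked(items, container, end); });
    }
}

renderChunked(data, document.getElementById("list"));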
On page load, I have my DataTable results available which I need to pass back to the javascript for processing.
What are my options?
1) Use a hidden field to pass the data back up. Not sure exactly how, but maybe convert it to XML/JSON and then access it from javascript that way. Seems like a pain. No extra round trips for this approach.
2) Use a webmethod/webservice to issue a call directly from the javascript and then get back the DataTable; however, this requires an extra round trip since I already have the DataTable available on page load.
3) Access objects in code-behind using ASP.NET inline expressions (i.e. the <% syntax) usable from the .aspx page. No extra round trips for this approach.
4) Convert the DataTable to JSON/XML and then use ASP's ClientScript.RegisterStartupScript to make it available there as a string returned from a function or something. Sounds hacky though.
5) Bind the DataTable/DataSet (or any object that implements the IEnumerable interface) to an ASP data control such as a DataGrid, DataList, Repeater, etc., and then just hide the control via some CSS: #datacontrol {display: none;}
How can I do this?
If it is just an array, refer to this.
If it is a custom object, IMO it is better to call the server method via ajax requests, load the results into javascript objects, and work on the data.
There are a number of ways you could do it, as you have already reasoned through. Any time I have needed to do this, I have written my data out to a javascript variable using the ClientScript.RegisterClientScriptBlock method and then used javascript in the page to access the variable and do the rest of the parsing.
See this for example.
You would just use the StringBuilder, or whatever you chose, to declare a javascript array containing your table values and pass that to the page.
I had this same requirement in one of my ASP.NET applications. What I did was create a string from the DataTable in Page_Load(), using custom separators (like :: and ;) as the row separator and column separator, and then set the string on a hidden field or a TextBox hidden with CSS ({display: none;}).
After that, you can get the value of the hidden field or TextBox in javascript, in the $(document).ready() block, or in a javascript pageLoad() function if you want it to be executed on every full/partial postback.
Get the value, decode it the same way you encoded the DataTable into the string in code-behind, and process it as you need.
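For the decoding side, a minimal sketch, assuming :: separates rows, ; separates columns, and the hidden field renders with the client id hfTableData (all three are assumptions based on the description above):

$(document).ready(function () {
    var raw = $("#hfTableData").val();
    var rows = raw.split("::");           // one entry per DataTable row
    for (var i = 0; i < rows.length; i++) {
        var cols = rows[i].split(";");    // one entry per column
        // process cols[0], cols[1], ... as you need
    }
});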
Hope it helps :)
After trying several of the above options, I've found #3 (ASP.NET inline expressions) to be the best choice for accessing an ADO query result such as a DataTable or DataSet, since it was the quickest to implement and had no additional round trips, due to the fact that inline expressions are resolved during page construction.
I tried #5 (bind data to a data control, then hide it), but it was noticeably slower and I had problems finding a control that would expose all the features of a DataTable/DataSet. There are a handful of data controls you can bind records to, but what I found was that many of the niceties that come with a DataSet/DataTable are lost when converting to a Repeater control or the like. If you use one of the more full-featured ASP controls to get those features back, you lose on performance, since those controls are meant for read/write and for rendering the content for display. And it wasn't as simple as I expected to access the data as it is in the code-behind.
I thought about #4 (passing data through ASP's ClientScript.RegisterStartupScript), but didn't feel like serializing/deserializing every object I need to expose and working through any hiccups that come with it. It just didn't seem like the right way. I suppose it would be fine for simpler objects though.
And #1 (serialize data to a hidden field) is pretty much the same concept as #4 (passing data through ASP's ClientScript.RegisterStartupScript), so I didn't bother with that one either.
The other (second-best) possibility is #2 (using a webmethod/webservice), as #Sundeep and #ron_tornambe have pointed out. However, this option adds an extra round trip to the page request, and since the above scenario has the DataTable/DataSet ready for consumption on page load, it is not optimal for me. If it weren't for that, I'd say it was equal to my first choice of #3 (ASP.NET inline expressions), since you get the full-featured object to work with.
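For reference, a minimal sketch of how #3 might look in the .aspx page, assuming a code-behind property TableJson that exposes the serialized DataTable (the property name and the serialization step are assumptions):

<script type="text/javascript">
    // The inline expression is resolved server-side while the page is constructed,
    // so no extra round trip is needed.
    var tableData = <%= TableJson %>;
    for (var i = 0; i < tableData.length; i++) {
        // each entry is one row of the DataTable, serialized as a JS object
    }
</script>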
I often use data-attributes to store configuration that I can't semantically markup so that the JS will behave in a certain way for those elements. Now this is fine for pages where the server renders them (dutifully filling out the data-attributes).
However, I've seen examples where the javascript writes data-attributes to save bits of data it may need later. For example, when posting some data to the server: if it fails to send, the data is stored in a data-attribute and a retry button is provided. When the retry button is clicked, it finds the appropriate data-attribute and tries again.
To me this feels dirty and expensive as I have to delve into the DOM to then dig this bit of data out, but it's also very easy for me to do.
I can see 2 alternative approaches:
One would be to take advantage of the scoping of an anonymous JavaScript function to keep a handle on the original bit of data, although this may not be possible and could perhaps lead to too much "magic".
Two, keep an object around that keeps track of these things. Instead of asking the DOM for the contents of a certain data-attribute, I just query my object.
I guess my assumptions are that the DOM should not be used to store arbitrary bits of state, and instead we should use simpler objects that have a single purpose. On top of that I assume that accessing the DOM is more expensive than a simpler, but specific object to keep track of things.
What do other people think with regards to, performance, clarity and ease of execution?
Your assumptions are very good! Although it's allowed and perfectly valid, it's not good practice to store data in the DOM. Sure, it's fine if you only have one input field, but as the application grows you end up with a jumbled mess of data everywhere... and, as you mentioned, the DOM is SLOW.
The bigger the app, the more essential it is to separate your concerns:
DOM events -> trigger JS functions -> access data (JS object, JS API, or AJAX API) -> process results (API call or DOM change)
I'm a big fan of creating an API to access JS data, so you can also trigger new events upon add, delete, get, change.
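As a minimal sketch of that idea, keep the payloads in a plain object and leave only an identifier in the DOM (the /api/save URL, the element ids, and the data-for attribute are all assumptions):

var pendingRetries = {};

function send(id, payload) {
    $.post("/api/save", payload).fail(function () {
        pendingRetries[id] = payload;      // remember what to resend
        $("#retry-" + id).show();          // offer a retry button
    });
}

$(document).on("click", ".retry", function () {
    var id = $(this).attr("data-for");     // only an id lives in the DOM
    send(id, pendingRetries[id]);
});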
The JStree plugin for jQuery allows you to load data to feed a navigable tree GUI element provided by the library.
For small trees, just load them all in memory, and you're done. But for large trees, that might not be a good solution. The all-in-memory approach doesn't scale.
Think 6000 nodes (or 60,000), most of which will never be relevant to the user viewing the page. Wouldn't it then be better to load only the first level of branches and incrementally load more branches following the user's clicks along what's already displayed? It certainly would.
You'd mark the tree where it has missing branches, then you'd load the missing branches on demand, remove the mark from the tree, graft the branch onto the tree, and proceed in that manner recursively if necessary.
How do you do incremental loading? I found a question from 2009 pertaining to the same problem, but the API appears to have changed. Does anyone have a recipe for how to proceed with the current version of the library?
Note incremental loading is not the same as incremental rendering, which is another optimization already provided by the library.
Lumi, that is how the plugin works.
Have a look at the Demo page, about half way down, at the section "PHP & mySQL demo + event order". The example uses the JSON format for transmitting data, which is the de facto standard, but the plugin also supports other formats. When you expand a parent node, an AJAX request is made to load the next level of nodes.
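With jsTree 3.x, for example, lazy loading can be configured roughly like this (the URL and parameter name are assumptions; the server should return "children": true on nodes whose children haven't been loaded yet, which marks them expandable and triggers a new request on expand):

$("#tree").jstree({
    "core": {
        "data": {
            "url": "/tree/children",
            "data": function (node) {
                return { "id": node.id };   // node.id is "#" for the root request
            }
        }
    }
});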
I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40kb of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table because the data is almost always new. I am attaching a few events to the table, approx 5 per line, with 30 lines in the table. I use jQuery's .html() method to put the new html into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements it is overwriting. I then also delete the large html variables once they are sent to the DOM, using delete my_var.
I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance Javascript" by Nicholas Zakas, but didn't have much time to get into it yet.
To give an idea of how much memory this is using: after about 4 hours it is using about 420,000K in Chrome, and much more in Firefox or IE.
Thanks!
I'd suggest writing a test version of your script without events. DOM / JS circular references might be very hard to spot. By eliminating some variables from the equation, you might be able to narrow down your search a bit.
I have experienced the same thing. I had a piece of code that polled every 10 seconds and retrieved a count of the current user's errors (data entry/auditing), a simple integer. This was then used to replace the text inside a div (so the user knew when new errors were found in their work). If left overnight, the browser would end up using well over 1GB of memory!
The closest I came to solving the issue was reducing the polling to every 2 minutes and insisting the user close the browser down at the end of the day. A better solution still would have been to use Ajax Push Engine to push data to the page ONLY when an error was created. That would have resulted in data being sent less frequently, and thus less memory being used.
Are you saving anything to an object / array? I've had this happen before with a Chrome plugin where an array just kept getting larger and larger. This sounds like it might be your problem, especially considering you're fetching 40kb every 15 seconds.
A snippet would be great. It seems as if you are creating new variables each time and the old ones aren't going out of scope, and therefore aren't being garbage collected.
Also, try to encapsulate more of your JS using constructors and instances of objects, etc. When the JS is just a list of functions, and all the variables have global scope rather than being properties of an instance, your JS can take up a lot of memory.
Why not draw the table and attach events only once, and just replace the table data every 15 seconds?
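A minimal sketch of that approach, assuming each row has a stable id and the JSON carries matching ids and fields (all assumptions):

function refresh() {
    $.getJSON("/data", function (rows) {
        for (var i = 0; i < rows.length; i++) {
            var tr = $("#row-" + rows[i].id);    // row was built once on page load
            tr.find(".name").text(rows[i].name);
            tr.find(".value").text(rows[i].value);
        }
    });
}
setInterval(refresh, 15000); // only the data changes; events stay attached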
1) jQuery's Ajax wrapper, called recurrently, leads to memory leaks, and the community is aware of that (although the issue with the ajax wrapper is not nearly as ugly as your case).
2) When it comes to optimization, you've done the first step (using lightweight JSON calls) and used the delete method, but the problem is in the "event attaching" area and the html() method.
What I mean is that:
1) you are probably reattaching the listeners after each html() call
2) you redraw the whole table on each ajax call.
This indeed leads to memory leaks.
You have to:
1) draw the table (with its initial content) on the server side
2) in $(document).ready, attach listeners to the table's cells (see the sketch after this list)
3) call the JSON service with ajax and parse the response
4) refill the table with the parsed array data
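For step 2, a minimal sketch using a delegated handler, so nothing needs reattaching after each refill (the selector names are assumptions):

$(document).ready(function () {
    // one handler on the table survives any number of tbody refills
    $("#results").on("click", "td", function () {
        console.log("clicked:", $(this).text());
    });
});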
Tell us what you've achieved in the meantime :)
I was having a similar problem just this week. It turned out I had a circular reference in my database: I had an inventory item ABC123 flagged as being replaced by XYZ321, and I also had XYZ321 flagged as being replaced by ABC123. Sometimes the circular reference is not in the PHP code.