Which Method Do I Need To Choose For Fetching Large Data? - javascript

My problem is that I have a large amount of data (over 50k records) that I have to map to the DOM with vanilla JavaScript (ES). Sometimes the page crashes while the data is loading. What should I choose: async/await or promises? Also, which would be better, XHR or fetch? Or should I use some third-party library? This is a big problem for me because sometimes the data shows up after an interval, but sometimes the page crashes. Can anyone explain?

Firstly, I would question the value gained from really mapping 50K items onto a single web page. Look at loading data on-demand in sets as the user scrolls through the items, or at some filtering mechanism applied to the data before loading it.
If you really have to load that much data into a page, then look at loading it in chunks, or at optimizing your code so that less strain is put on the browser.
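As a minimal sketch of that chunked approach (the /api/items endpoint, the page size and the #list container are assumptions, not something from the question):

    const container = document.getElementById('list');   // placeholder container
    const PAGE_SIZE = 500;

    async function loadPage(page) {
        const res = await fetch('/api/items?page=' + page + '&size=' + PAGE_SIZE);
        if (!res.ok) throw new Error('Request failed: ' + res.status);
        return res.json();   // assume the server returns an array of items
    }

    function renderChunk(items) {
        const frag = document.createDocumentFragment();
        for (const item of items) {
            const li = document.createElement('li');
            li.textContent = item.name;   // adapt to your item shape
            frag.appendChild(li);
        }
        container.appendChild(frag);      // one DOM insertion per chunk
    }

    async function loadAll() {
        let page = 0;
        let items;
        do {
            items = await loadPage(page++);
            renderChunk(items);
            // yield back to the browser so it can paint between chunks
            await new Promise(function (resolve) { setTimeout(resolve, 0); });
        } while (items.length === PAGE_SIZE);
    }

    loadAll().catch(console.error);

Whether you write this with async/await or raw promises makes no performance difference; the win comes from paging the requests and batching the DOM writes.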

Related

Javascript Rule of thumb for delay length while using setTimeout() to allow a "loading" popup to appear

I'm using the setTimeout() function in javascript to allow a popup that says "loading" to be shown while I'm parsing some xml data. I found that at small enough delay values (below 10ms) it doesn't have time to show it before the browser freezes for a moment to do the actual work.
At 50ms, it has plenty of time, but I don't know how well this will translate to other systems. Is there some sort of "rule of thumb" that would dictate the amount of delay necessary to ensure a visual update without causing unnecessary delay?
Obviously, it'll depend on the machine on which the code is running etc., but I just wanted to know if there was anything out there that would give a little more insight than my guesswork.
The basic code structure is:
showLoadPopup();

var t = setTimeout(function () {
    parseXML();      // real work
    hideLoadPopup();
}, delayTime);
Thanks!
UPDATE:
Turns out that parsing XML is not something that Web Workers can usually do since they don't have access to the DOM or the document etc. So, in order to accomplish this, I actually found a different article here on Stack Overflow about parsing XML inside a Web Worker. Check out the page here.
By serializing my XML object into a string, I can then pass it into the Web Worker through a message post, and then, using the JavaScript-only XML parser that I found in the aforementioned link, turn it back into an XML object within the Web Worker, do the parsing needed, and then pass back the desired text as a string without making the browser hang at all.
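Roughly, the hand-off looks like this (xmlWorker.js, xmlParser.js and parseXmlString() stand in for the worker file and the pure-JS parser from that linked answer; they, along with showResult() and buildDesiredText(), are placeholders):

    // main page: serialize the XML document and hand it to the worker
    var worker = new Worker('xmlWorker.js');              // placeholder file name
    var xmlString = new XMLSerializer().serializeToString(xmlDoc);

    worker.onmessage = function (e) {
        hideLoadPopup();
        showResult(e.data);   // plain text comes back, never DOM nodes
    };

    showLoadPopup();
    worker.postMessage(xmlString);

    // xmlWorker.js: no DOM access here, so use a pure-JS parser
    importScripts('xmlParser.js');                        // placeholder parser script

    onmessage = function (e) {
        var tree = parseXmlString(e.data);                // heavy parsing happens off the main thread
        postMessage(buildDesiredText(tree));              // send back a string, not an object graph
    };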
Ideally you would not ever have to parse something on the client side that actually causes the browser to hang. I would look into moving this to an AJAX request that pulls parts of the parsed XML (child nodes as JSON), or look at using Web Workers or another client-side asynchronous option.
There appears to be no "rule-of-thumb" for this question simply because it was not the best solution for the problem. Using alternative methods to do the real meat of the work was the real solution, not using a setTimeout() call to allow for visual update to the page.
The options given were:
HTML 5's new Web Worker option (alternative information)
Using an AJAX request
Thanks for the advice, all.

JavaScript custom event handler strategy advice

I am in the middle of the design/development of a web store and am thinking my way through the best way of handling a transparent load of a couple of megabytes of product items. It seems the Asynchronous bit of AJAX doesn't mean parallel so I have to be a little bit creative here.
Rather than just pull a large lump of data down I was thinking of breaking it into pages of say 50->100 items and allowing the browser some time to process any internal messages.
The loader would pull down a page of data - fire a custom event to itself to get the next page. Theory is that if the browser has other messages to process this event would queue up behind them allowing the browser do anything else it has to do. A loss of a bit of speed - but a smoother user experience.
Rinse and repeat.
Add in some smoke and mirrors engineering - a loading icon or some such - to keep the user from noticing any delays and I should be right.
Before I dive into what is starting to sound like a fun bit of code, can anyone think of a better way to pull down a large lump of data in as smooth and friendly a way as possible? I am an ancient old programmer - but JavaScript is a bit new to me.
Am I reinventing the wheel - AJAX already does all this - and I just don't know about it?
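Roughly what I have in mind (the event name, endpoint and helper functions below are just placeholders):

    var PAGE_SIZE = 50;
    var currentPage = 0;

    document.addEventListener('pageLoaded', function () {
        currentPage++;
        if (currentPage < totalPages) {       // totalPages would come from the first response
            loadPage(currentPage);
        }
    });

    function loadPage(page) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/products?page=' + page + '&size=' + PAGE_SIZE);
        xhr.onload = function () {
            renderItems(JSON.parse(xhr.responseText));             // display this chunk
            document.dispatchEvent(new CustomEvent('pageLoaded')); // chain the next page load
        };
        xhr.send();
    }

    loadPage(0);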
There are two ways to improve the situation:
a) Reduce the data coming from the database - i.e. if there is information that is never used, you don't need to load it. Also, if there is data that doesn't change, you can cache it and request it only once at the beginning.
b) Load only the information which you need to show - that's the approach you are thinking about, except that you want to trigger loading of new data automatically. Or at least that's what I understood. I'd suggest keeping the AJAX requests to as few as possible and making a new one only when the user needs more data. For example, if the user stays on page 1 of 20, you don't need to fire loading of pages 3 and 4. It may be a good idea to load page 2, though, so the user can switch quickly.
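For example (the endpoint and renderItems() are made-up names), something along these lines:

    var pageCache = {};

    function fetchPage(page, onLoaded) {
        if (pageCache[page]) {                 // already fetched - no new request
            onLoaded(pageCache[page]);
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/products?page=' + page);
        xhr.onload = function () {
            pageCache[page] = JSON.parse(xhr.responseText);
            onLoaded(pageCache[page]);
        };
        xhr.send();
    }

    function showPage(page) {
        fetchPage(page, renderItems);          // display the page the user asked for
        fetchPage(page + 1, function () {});   // quietly warm the cache for the next page
    }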

Avoiding content reload without use of frames/iframes

There are plenty of reasons to want to avoid <iframe>s (and indeed frames in general) but what are the best alternatives? (The intent here being to avoid full page reloads).
Facebook, for instance, seems to keep its top bar and side menu intact (for the most part), and a full page reload is incredibly rare.
Searching for explanations with little idea of what to use as search terms has given me little insight, so I thought it best to raise the question here. Is this all Ajax, or is there more to it than that?
AJAX
The more traditional approach is "AJAX". In a nutshell, your JavaScript code can request specific content from the server on a timer (every x seconds) or when a user event happens (e.g. a button click).
A very basic implementation in jQuery would look something like:
function updateShouts() {
    // Assuming we have #shoutbox
    $('#shoutbox').load('latestShouts.php');
}

setInterval(updateShouts, 10000);
This will update a div with id "shoutbox" every 10 seconds with whatever content is retrieved from latestShouts.php.
A more advanced implementation would involve retrieving only data (not presentation), in a format like JSON or XML, and then updating the existing HTML with the data that was received.
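For instance, a data-only version of the same shoutbox update might look like this (the JSON endpoint and field names are assumptions):

    function updateShouts() {
        $.getJSON('latestShouts.php?format=json', function (shouts) {
            var html = $.map(shouts, function (shout) {
                // real code should escape user-supplied text before inserting it
                return '<li>' + shout.author + ': ' + shout.text + '</li>';
            }).join('');
            $('#shoutbox').html(html);
        });
    }

    setInterval(updateShouts, 10000);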
WebSockets
More recently, browsers have started supporting something called WebSockets. WebSockets allow you to keep a bidirectional connection open between the browser and the server, and it allows the server to push information to the browser without the browser requesting it.
This is more efficient in many ways, the main one being that you don't waste server calls every x seconds just to check whether new data has arrived. WebSockets allow you to display information from the server almost as soon as it becomes available.
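A bare-bones client-side example, assuming the server exposes a WebSocket endpoint at wss://example.com/shouts (a placeholder URL):

    var socket = new WebSocket('wss://example.com/shouts');   // placeholder URL

    socket.onmessage = function (event) {
        var shout = JSON.parse(event.data);                    // assume the server sends JSON
        var li = document.createElement('li');
        li.textContent = shout.author + ': ' + shout.text;
        document.getElementById('shoutbox').appendChild(li);
    };

    socket.onclose = function () {
        // production code would reconnect here, ideally with a backoff
    };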
I hope that helps..
Cheers!
Injecting partial content using ajax is your best and easiest bet - I recommend jquery too.

Save or destroy data/DOM elements? Which takes more resources?

I've been getting more and more into high-level application development with JavaScript/jQuery. I've been trying to learn more about the JavaScript language and dive into some of the more advanced features. I was just reading an article on memory leaks when I read this section of the article.
JavaScript is a garbage collected language, meaning that memory is allocated to objects upon their creation and reclaimed by the browser when there are no more references to them. While there is nothing wrong with JavaScript's garbage collection mechanism, it is at odds with the way some browsers handle the allocation and recovery of memory for DOM objects.
This got me thinking about some of my coding habits. For some time now I have been very focused on minimizing the number of requests I send to the server, which I feel is just a good practice. But I'm wondering if sometimes I don't go too far. I am very unaware of any kind of efficiency issues/bottlenecks that come with the JavaScript language.
Example
I recently built an impound management application for a towing company. I used the jQuery UI dialog widget and populated a datagrid with specific ticket data. Now, this sounds very simple on the surface... but there is a LOT of data being passed around here.
(and now for the question... drumroll please...)
I'm wondering what the pros/cons are for each of the following options.
1) Make only one request for a given ticket and store it permanently in the DOM, simply showing/hiding the modal window; this means only one request is sent out per ticket.
2) Make a request every time a ticket is open and destroy it when it's closed.
My natural inclination was to store the tickets in the DOM - but I'm concerned that this will eventually start to hog a ton of memory if the application goes a long time without being reset (which it will be).
I'm really just looking for pros/cons for both of those two options (or something neat I haven't even heard of =P).
The solution here depends on the specifics of your problem, as the 'right' answer will vary based on length of time the page is left open, size of DOM elements, and request latency. Here are a few more things to consider:
Keep only the newest n items in the cache. This works well if you are only likely to redisplay items in a short period of time.
Store the data for each element instead of the DOM element, and reconstruct the DOM on each display.
Use HTML5 Storage to store the data instead of DOM or variable storage. This has the added advantage that data can be stored across page requests.
Any caching strategy will need to consider when to invalidate the cache and re-request updated data. Depending on your strategy, you will need to handle conflicts that result from multiple editors.
The best way is to get started using the simplest method, and add complexity to improve speed only where necessary.
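For example, the "keep only the newest n items" idea could be sketched like this, caching the ticket data (not DOM nodes) in memory (the endpoint and size limit are illustrative):

    var MAX_CACHED = 20;           // tune to your memory budget
    var ticketCache = new Map();   // insertion order doubles as recency order

    function getTicket(id, onLoaded) {
        if (ticketCache.has(id)) {
            var data = ticketCache.get(id);
            ticketCache.delete(id);        // re-insert to mark as most recently used
            ticketCache.set(id, data);
            onLoaded(data);
            return;
        }
        $.getJSON('/tickets/' + id, function (data) {   // placeholder endpoint
            ticketCache.set(id, data);
            if (ticketCache.size > MAX_CACHED) {
                var oldest = ticketCache.keys().next().value;
                ticketCache.delete(oldest);              // evict the least recently used ticket
            }
            onLoaded(data);
        });
    }

From the cached data you would rebuild the dialog's DOM each time it is shown and throw it away on close, which keeps the memory footprint bounded while avoiding most repeat requests.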
The third path would be to store the data associated with a ticket in JS, and create and destroy DOM nodes as the modal window is summoned/dismissed (jQuery templates might be a natural solution here.)
That said, the primary reason you avoid network traffic seems to be user experience (the network is slower than RAM, always). But that experience might not actually be degraded by making a request every time, if it's something the user intuits involves loading data.
I would say number 2 would be best. Because that way if the ticket changes after you open it, that change will appear the second time the ticket is opened.
One important factor is the number of redraws/reflows that are triggered by DOM manipulation. It's much more efficient to build up your content changes and insert them in one go than to do it incrementally, since each increment causes a redraw/reflow.
See: http://www.youtube.com/watch?v=AKZ2fj8155I to better understand this.

Strategies for rendering HTML with Javascript

I take a fat JSON array from the server via an AJAX call, then process it and render HTML with Javascript. What I want is to make it as fast as humanly possible.
Chrome leads over FF in my tests but it can still take 5-8 seconds for the browser to render ~300 records.
I considered lazy-loading such as that implemented in Google Reader but that goes against my other use cases, such as being able to get instantaneous search results (simple search being done on the client side over all the records we got in the JSON array) and multiple filters.
One thing I have noticed is that both FF and Chrome do not render anything until the loop over all items in the JSON array has finished, even though I explicitly insert the newly created elements into the DOM on every iteration (as soon as I have the HTML). What I'd like to achieve would be just that: force the browser to render as soon as it can.
I tried deferring the calls (every item from the array would be processed by a deferred function) but ran into additional issues there, as it seems the order of execution is no longer guaranteed (some items further down the array would be processed before items ahead of them).
I'm looking for any hints and tips here.
Try one of these:

Push rows of markup into an array, then simply:

    el.innerHTML = array.join("");

Or use document fragments:

    var frag = document.createDocumentFragment();
    for (var i = 0; i < rows.length; i++) {
        frag.appendChild(rows[i]);   // rows[i] being an already-built element
    }
    parent.appendChild(frag);        // one insertion into the live DOM
If you don't need to display all 300 records at once you could try to paginate them 30 or 50 records at a time and only unroll the JSON array as those sub-parts are required to be displayed through a pager or a local search box. Once converted you could cache the content for subsequent display as users navigate up and down the pages.
Try creating the elements in a detached DOM node or a document fragment, then attaching the whole thing in one go.
300 isn't a lot.
I managed to create a tree of over 500 elements with data from JSON using jQuery, in a fraction of a second on Chrome.
300 isn't a big number.
If they are rendered so slowly, it might be due to a wrong way of doing it. Can you specify how you do it?
The slowest way would be to build the HTML as a string in JavaScript and then assign it via the innerHTML property. But even that would still be fast as hell for 300 rows.
Google Web Toolkit has BulkTableRenderers that are designed to render large tables quickly. Even if you choose not to use GWT, you might be able to pick up some techniques by looking through the source code which are available under Apache License, Version 2.0.
