I have a large dataset that should be shown in a table.
I use JavaScript to fill the table instead of printing it in HTML.
Here is a sample data I use:
var aUsersData = [[1, "John Smith", "...."],[...],.......];
The problem is that Firefox warns me, "There is a heavy script running, should I continue or stop?"
I don't want my visitors to see that warning. How can I make performance better? jQuery? Pure script? Or another library you suggest?
You can use the method described here to show a progress bar and keep the browser from locking up on you:
http://www.kryogenix.org/days/2009/07/03/not-blocking-the-ui-in-tight-javascript-loops
I am using more or less that method on this page:
http://www.bacontea.com/bb/
to keep the browser from hanging and to show feedback while loading.
jQuery doesn't usually make things faster, just easier. I use jQuery to populate tables, but they're pretty small (at most 2 columns by 40 rows). How much data are you populating into the table? This could be the limiting factor.
If you post some of your table-populating code we can see if it's possible to improve performance in any way.
My suspicion is that it won't make much difference either way, although sometimes adding a layer of abstraction like jQuery can impact performance. Alternatively, the jQuery team may have found an obscure, really fast way of doing something that you would have done in a more obvious, but slower, way without it. It all depends.
Two suggestions that apply regardless:
First, since you're already relying on your users having JavaScript enabled, I'd use paging and possibly filtering as well. My suspicion is that it's building the table that takes the time. Users don't like to scroll through really long tables anyway; adding some paging and filtering features so the page only shows the entries from the array they really want to see may help quite a lot.
Second, when building the table, the way you do it can have a big impact on performance. It seems a bit counter-intuitive, but in most browsers, building up a big string and then setting the innerHTML property of a container is usually faster than using the DOM createElement function over and over to create each row and cell. The fastest overall tends to be to push strings onto an array (rather than repeatedly concatenating) and then join the array:
var markup, rowString, index, length;
markup = [];
markup.push("<table><tbody>");
for (index = 0, length = array.length; index < length; ++index) {
    rowString = /* ...code here to build a row as a string... */;
    markup.push(rowString);
}
markup.push("</tbody></table>");
document.getElementById('containerId').innerHTML = markup.join("");
(That's raw JavaScript/DOM; if you're already using jQuery and prefer it, the last line can be rewritten as $('#containerId').html(markup.join(""));)
This is faster than using createElement all over the place because it lets the browser process the HTML by directly manipulating its internal structures, rather than responding to the DOM API methods layered on top of them. And the array join helps because the strings are not constantly being reallocated, and the JavaScript engine can optimize the single join operation at the end.
Naturally, if you can use a pre-built, pre-tested, and pre-optimised grid, you may want to use that instead, as a good one can provide the paging, filtering, and fast building required.
You can try a JS templating engine like PURE (I'm the main contributor).
It is very fast on all browsers, and keeps the HTML totally clean and separated from the JS logic.
If you prefer the <%...%> type of syntax, there are plenty of other JS template engines available.
I am currently working on a project that requires me to iterate through a list of values and add a new value between each pair of values already in the list. This will happen on every iteration, so the list grows exponentially. I decided that implementing the list as a linked list would be a great idea. Now, JS has no built-in linked list data structure, and I have no problem creating one.
But my question is: would it be worth it to create a simple linked list from scratch, or would it be a better idea to just create an array and use splice() to insert each element? Would the latter, in fact, be less efficient due to the overhead?
Use a linked list; in fact, most well-written custom implementations in user JavaScript will beat the built-in implementations, due to spec complexity and decent JITting. For example, see https://github.com/petkaantonov/deque
What George said is literally 100% false on every point, unless you take a time machine to 10 years ago.
As for implementation, do not create an external linked list that contains the values; make the values themselves the linked-list nodes. You will otherwise use far too much memory.
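A minimal sketch of what that looks like (the record fields here are illustrative, not from the question):
// Intrusive list: the value objects themselves carry the `next` pointer,
// so no separate per-element node object is allocated.
function insertAfter(record, newRecord) {
    newRecord.next = record.next;   // link the new record to the old successor
    record.next = newRecord;        // O(1), no shifting as with splice()
}
var head = { value: 1, next: null };           // records double as list nodes
insertAfter(head, { value: 2, next: null });   // list is now 1 -> 2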
Inserting each element with splice() would be slower indeed (inserting n elements takes O(n²) time). But simply building a new array (appending new values and appending the values from the old one in lockstep) and throwing away the old one takes linear time, and most likely has better constant factors than manipulating a linked list. It might even take less memory (linked list nodes can have surprisingly large space overhead, especially if they aren't intrusive).
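A rough sketch of that linear-time rebuild, assuming a hypothetical combine() that produces the in-between value:
// Linear-time alternative to repeated splice(): build a fresh array,
// appending old values and new in-between values in lockstep.
function interleave(list, combine) {
    var result = [];
    for (var i = 0; i < list.length; i++) {
        result.push(list[i]);
        if (i + 1 < list.length) {
            result.push(combine(list[i], list[i + 1]));   // new in-between value
        }
    }
    return result;   // discard the old array and keep this one
}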
JavaScript is an interpreted language. If you want to implement a linked list then you will be looping a lot! The interpreter will perform very slowly. The built-in functions provided by the interpreter are optimized and compiled with the interpreter, so they will run faster. I would choose to slice the array and then concatenate everything again; it should be faster than implementing your own data structure.
As well, JavaScript passes by value, not by pointer/reference, so how are you going to implement a linked list?
This is for a single-page, mobile web app....
For readability I've been concatenating my HTML, then injecting it. I'm pretty certain there's a more efficient way, and would like to get a JS expert's opinion!
Here's an example of one of my concatenated HTML strings...
var html_str = '';
$.each(events_array, function (k, ev_type) {
    if (localStorage.getItem('show_type' + ev_type.type_num) !== 'false') {
        $.each(ev_type, function (k2, e) {
            if (typeof e !== 'string') {
                // date separator row, only in favourites mode
                if (fav_mode && last_date_num !== e.date) {
                    html_str += '<li class="date">' + e.date_text + '</li>';
                    last_date_num = e.date;
                }
                html_str += '<li';
                // note: logical || (not bitwise |) is intended here
                if (fav_mode || (FAVOURITES && $.inArray(parseInt(e.event_id, 10), FAVOURITES) >= 0)) {
                    html_str += ' class="fav"';
                }
                html_str += '>';
                html_str += '<div class="l_' + e.gig_club + '"></div>';
                html_str += '<p rel="' + e.event_id + '"><span>' + e.venue + ' : </span>' + e.nameofnight + '</p>';
                html_str += '</li>';
            }
        });
    }
});
return html_str;
There is no "fastest". There is only "fastest for a particular browser".
There are three common techniques: HTML string manipulation, templating, and DOM manipulation.
Because templating can use both HTML string manipulation and the DOM internally, I would recommend it for readability/maintainability.
Here are a few benchmarks
Templating
More templating
Templating with data for mobile platforms
Loads of templates
Dust js benchmark
If you're talking about huge amounts of HTML and perf is paramount:
Definitely Don't:
Inject DOM nodes iteratively in a loop. This can hurt perf even in non-gigantic HTML scenarios.
Try to save work by reusing the live existing HTML, tweaking attributes and swapping out content across a large variety of elements. As with the previous point, this can get ugly even with things not involving thousands of elements. Large tables that don't have table-layout set to fixed can get particularly nasty when you trigger a ton of reflow.
Use jQuery's direct string-to-DOM building, which, IMO, is actually excellent for most uses, since it does a great job of validating against nasty escape sequences when injecting data from sources that may one day not be that secure. If you're going with strings, use the html method to build and inject from them instead. Although really, for max perf, I'd probably just drop jQuery if I 100% trusted my data.
Do:
Build it all out in a document fragment and append that. Or build an HTML string and drop it all in at once. I prefer the hybrid method described below under "Maybe", but I haven't tested it in 2-3 years. That way the browser only has to repaint the page once. Google "CSS reflow" for more on that.
If you have to make lots of tweaks to a large volume of HTML, just change the data and rebuild the whole set, for the same reason as above.
If you build from strings, minimize concatenation with the '+' operator and use array joins instead. Templating with arrays works great in a pinch.
Worry about loops vs. iterators that take a function argument if IE<=8 is a concern. Repeated function calls can get expensive without a JIT compiler. They can actually help you in JITted engines if they're inlined (defined inside another function with no references returned outside).
Test everything you try. There are gray areas in everything except the multiple-appends vs. one-giant-append rule. Multiple will always be slower.
Maybe:
An excellent trick in legacy browsers was to innerHTML large strings into document fragments and append those via DOM methods. This wasn't the fastest for all browsers, but it was the best approach across the board back when IE7 mattered, and in modern JIT browsers the differences between one big innerHTML block, DOM-methods-only into a document fragment, and the hybrid approach are mostly negligible.
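A minimal sketch of that hybrid trick, assuming you already have the markup built as one big string:
// Parse a large HTML string off-document, then append everything at once.
function appendHtml(container, html) {
    var temp = document.createElement('div');
    temp.innerHTML = html;   // string parsing happens off-DOM
    var frag = document.createDocumentFragment();
    while (temp.firstChild) {
        frag.appendChild(temp.firstChild);   // move parsed nodes into the fragment
    }
    container.appendChild(frag);   // one append, one reflow
}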
I would totally recommend templating too.
But I think you already follow the main best practice for injecting HTML: it's far better to build the HTML and then inject it in one go than to inject many small bits of HTML, as the browser may repaint/reflow the document on each injection.
An explicit for loop will definitely be much faster than $.each(), mainly because the latter executes a function call for each element, but also for other reasons; e.g., with the new execution frame, the lookup time for html_str will be longer.
There is some empirical evidence to suggest (I think this was valid with older browsers; I'm not sure what is faster nowadays or on mobile devices, so it's worth checking) that adding the elements to an array by index (html_str[i] = ..., not html_str.push()) and then calling .join is faster than string concatenation.
As has been mentioned, adding one large DOM string is faster than small appends, and much faster than using DOM methods (appendChild, insertBefore, etc.).
A good templating engine would do these things for you (at a small extra cost), although I'm not sure how many of them do. And if it's only a small amount of "templating", it might be overkill to use a library when a simple loop does the trick.
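Putting those two suggestions together, a rough sketch of the inner append from the question rewritten with a plain for loop and an index-assigned array (events stands for an already-flattened array of the event objects):
var parts = [], p = 0;
for (var i = 0; i < events.length; i++) {
    var e = events[i];
    // index assignment instead of push(), per the note above
    parts[p++] = '<li><div class="l_' + e.gig_club + '"></div>';
    parts[p++] = '<p rel="' + e.event_id + '"><span>' + e.venue + ' : </span>' + e.nameofnight + '</p></li>';
}
var html_str = parts.join('');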
You might also consider using a documentFragment; though it may not be as readable as an HTML string, it is very effective performance-wise and arguably readable in an object-oriented way.
You can visit this page for details: http://ejohn.org/blog/dom-documentfragments/
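For instance, a sketch of the documentFragment approach applied to a list like the one in the question (element names are placeholders):
var frag = document.createDocumentFragment();
for (var i = 0; i < events.length; i++) {
    var li = document.createElement('li');
    li.appendChild(document.createTextNode(events[i].venue + ' : ' + events[i].nameofnight));
    frag.appendChild(li);   // happens off-document, so no reflow yet
}
document.getElementById('listings').appendChild(frag);   // single reflow here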
What are some of the better solutions to handling large datasets (100K rows) on the client with JavaScript? In particular, if you have multi-column sort and search capabilities, how do you handle fetching (and pre-fetching) the data, client-side model binding (for display), and caching the data?
I would imagine a good solution would involve doing some thoughtful work in the background. For instance, if the table initially displays N items, it might fetch 2N items, return the data to the user, and then fetch the next 2N items in the background (even if the user hasn't requested them). As the user makes search/sort changes, it would throw the prefetched data out (or maybe keep the initial base case cached) and apply the same approach again.
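A rough sketch of that idea (fetchPage, render, and the page size are hypothetical names):
var cache = {};         // pageIndex -> array of records
var PAGE_SIZE = 50;     // the N items shown at once (hypothetical)
function showPage(pageIndex) {
    if (cache[pageIndex]) {
        render(cache[pageIndex]);
    } else {
        fetchPage(pageIndex, PAGE_SIZE, function (records) {
            cache[pageIndex] = records;
            render(records);
        });
    }
    // prefetch the next page in the background, before the user asks for it
    if (!cache[pageIndex + 1]) {
        fetchPage(pageIndex + 1, PAGE_SIZE, function (records) {
            cache[pageIndex + 1] = records;
        });
    }
}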
Can you share the best solutions you have seen?
Thanks
Use a jQuery table plugin like DataTables: http://datatables.net/
It supports server-side processing for sorting, filtering, and paging. And it includes pipelining support to prefetch the next x pages of records: http://www.datatables.net/examples/server_side/pipeline.html
Actually the DataTables plugin works 4 different ways:
1. With an HTML table, so you could send down a bunch of HTML and then have all the sorting, filtering, and paging work client-side.
2. With a JavaScript array, so you could send down a 2D array and let it create the table from there.
3. Ajax source - which is not really applicable to you.
4. Server-side, where you send data in JSON format to an empty table and let DataTables take it from there.
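For illustration, a minimal server-side configuration might look roughly like this (assumes DataTables 1.10+; the table id and URL are placeholders):
$('#records').DataTable({
    serverSide: true,      // sorting, filtering, and paging are delegated to the server
    ajax: '/api/records'   // endpoint returning DataTables' expected JSON format
});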
SlickGrid does exactly what you're looking for. (Demo)
Using the AJAX data store, SlickGrid can handle millions of rows without flinching.
Since you tagged this with Ext JS, I'll point you to Ext.ux.LiveGrid if you haven't already seen it. The source is available, so you might have a look and see how they've addressed this issue. This is a popular and widely-used extension in the Ext JS world.
With that said, I personally think (virtually) loading that much data is useless as a user experience. Manually pulling a scrollbar around (jumping hundreds of records per pixel) is a far inferior experience to simply typing what you want. I'd much prefer some robust filtering/searching instead of presenting that much data to the user.
What if you went to Google and instead of a search box, it just loaded the entire internet into one long virtual list that you had to scroll through to find your site... :)
It depends on how the data will be used.
For a large dataset where the browser's Find function was adequate, just returning a straight HTML table was effective. It takes a while to load, but the display is responsive on older, slower clients, and you never have to worry about it breaking.
When the client did the sorting and searching, and we weren't showing the entire table at once, I had the server send tab-delimited tables through XMLHttpRequest, parsed them in the browser with list = text.split('\n'), and updated the display with repeated calls to $('node').innerHTML = 'blah'. The JS engine can store long strings pretty efficiently. That ran a lot faster on the client than showing, hiding, and rearranging DOM nodes. Creating and destroying DOM nodes on the fly turned out to be really slow. Splitting each line into fields on demand seems to work; I haven't experimented with that degree of freedom.
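A sketch of that approach (the URL and element id are placeholders):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/records.tsv', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        var rows = xhr.responseText.split('\n');   // one long string per record
        var html = [];
        for (var i = 0; i < rows.length; i++) {
            // split each row into cells; in practice you'd do this only for visible rows
            html[i] = '<tr><td>' + rows[i].split('\t').join('</td><td>') + '</td></tr>';
        }
        document.getElementById('grid').innerHTML = html.join('');
    }
};
xhr.send(null);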
I've never tried the obvious pre-fetch & background trick, because these other methods worked well enough.
Check out this comprehensive list of data grids and spreadsheets.
For filtering/sorting/pagination purposes you may be interested in the excellent Handsontable, or DataTables as a free alternative.
If you simply need to display a huge list without any additional features, Clusterize.js should be sufficient.
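For example, a minimal Clusterize.js setup looks roughly like this (ids and row markup are placeholders; check the project's README for the exact options):
var clusterize = new Clusterize({
    rows: rowsHtml,           // array of '<li>...</li>' strings built elsewhere
    scrollId: 'scrollArea',   // the scrollable container element
    contentId: 'contentArea'  // the element that receives only the visible rows
});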
I have a simple piece of data that I'm storing on a server as a plain string. It is kind of ridiculous, but it looks like this:
name|date|grade|description|name|date|grade|description|repeat for a long time
This string can be up to 1.4MB in size. The idea is that it's a bunch of student records, just strung together with a simple pipe delimiter. It's a very poor serialization method.
Once this massive string is pushed to the client, it is split along the pipes into student records again, using JavaScript.
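The split-and-regroup step might look roughly like this (field names taken from the format above; raw is the string from the server):
var fields = raw.split('|');
var records = [];
for (var i = 0; i + 3 < fields.length; i += 4) {
    records.push({
        name: fields[i],
        date: fields[i + 1],
        grade: fields[i + 2],
        description: fields[i + 3]
    });
}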
I've been timing how long it takes to create and split these strings on the client side. The times are actually quite good; the slowest run I've seen on a few different machines was 0.2 seconds for 10,000 'student records', with a final string size of ~1.4MB.
I realize this is quite bizarre; I'm just wondering if there are any inherent problems with creating and splitting such large strings using JavaScript. I don't know how different browsers implement their JavaScript engines. I've tried this in the 'major' browsers, but don't know how it would perform in earlier versions of each.
Yeah, I'm looking for any comments on this; it's more for fun than anything else!
Thanks
String splitting of 1.4MB of data is not a problem on decent machines; instead, you should worry about your users' internet connection speed. I tried to do spell checking with an 800KB dictionary (half of your data size), and the main issue was loading time.
But it looks like your student records could go in a database, and you might not need to load everything up front. So how about paginating the records shown to the user, or using Ajax to search for certain names on request?
If it's a really large string, it may pay to repeatedly slice the string with 'string'.slice(from, to) so you only process a smaller subset at a time, appending the individual items to the end of the output with list.push() or something similar.
String split methods are probably the most efficient way of doing this, though, even in IE. Processing individual characters using string.charAt(x) is extremely slow and will often trigger the unresponsive-script warning as it stalls the browser. Using string split methods would certainly be much faster than splitting with regular expressions.
It may also be possible to encode the data as a JSON array; some newer browsers such as IE8/WebKit/FF3.5 have fast JSON parsing built in via JSON.parse(data). But using eval(JSON) may lock up the browser if there's enough data, so it's probably a bad idea. It may pay to compare the two for performance, though.
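A sketch of that comparison, feature-detecting native JSON first (jsonText is a placeholder):
var records;
if (window.JSON && JSON.parse) {
    records = JSON.parse(jsonText);        // fast native parse (IE8/WebKit/FF3.5+)
} else {
    records = eval('(' + jsonText + ')');  // legacy fallback; risky with big or untrusted data
}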
A much better approach in a lot of cases is to use AJAX and only load some of the data at once from the server, which would also save download time.
Besides S. Mark's excellent comments about local speed vs. transfer speed and the tip to re-encode using Ajax, I suggest a (long-term) move away from JavaScript in the browser (assuming that's where it runs) to either a non-browser implementation of JS or possibly another language.
Browser-based JS seems a weak link in a data-transfer chain, and nothing I would want to run unmonitored, since browsers are upgraded from time to time and breaking your JS transfer might be an unanticipated side effect!
I know that accessing and manipulating the DOM can be very costly, so I want to do this as efficiently as possible. My situation is that a certain div will always contain a list of items; however, sometimes I want to refresh that list with a completely different set of items. In this case, I can build the new list and append it to that div, but I also need to clear out the old list. What's the best way? Set the innerHTML to the empty string? Iterate over the child nodes and call removeChild? Something else?
Have a look at QuirksMode. It may take some digging, but there are timings for exactly this operation in various browsers. Although the tests were done more than a year ago, setting innerHTML to "" was the fastest in most browsers.
P.S. Here is the page.
Set innerHTML to an empty string.
Emphasis on the can be costly. It is not always costly; in fact, when dealing with a small number of elements it can be trivial. Remember that optimizations should always be done last, and only after you have demonstrable evidence that the specific aspect you want to improve really is your performance bottleneck.
I recommend avoiding mucking with innerHTML if possible; it's easy to mess up and do something nasty to the DOM that you can't recover from gracefully.
This method is quite fast for 99.9% of cases, unless you are removing massive chunks of hierarchy from the DOM:
while (ele.childNodes.length > 0) {
    ele.removeChild(ele.childNodes[ele.childNodes.length - 1]);   // remove the last child each pass
}
I'd recommend testing a couple of implementations in the browsers you care about. Different JavaScript engines have different performance characteristics, and different DOM manipulation methods have different performance in different engines. Beware of premature optimization, and beware of differences between browsers.