Knockout.js incredibly slow under semi-large datasets - javascript

I'm just getting started with Knockout.js (always wanted to try it out, but now I finally have an excuse!) - However, I'm running into some really bad performance problems when binding a table to a relatively small set of data (around 400 rows or so).
In my model, I have the following code:
this.projects = ko.observableArray( [] ); //Bind to empty array at startup

this.loadData = function (data) //Called when AJAX method returns
{
    for (var i = 0; i < data.length; i++)
    {
        this.projects.push(new ResultRow(data[i])); //<-- Bottleneck!
    }
};
The issue is that the for loop above takes about 30 seconds with around 400 rows. However, if I change the code to:
this.loadData = function (data)
{
    var testArray = []; //<-- Plain ol' JavaScript array
    for (var i = 0; i < data.length; i++)
    {
        testArray.push(new ResultRow(data[i]));
    }
};
Then the for loop completes in the blink of an eye. In other words, the push method of Knockout's observableArray object is incredibly slow.
Here is my template:
<tbody data-bind="foreach: projects">
    <tr>
        <td data-bind="text: code"></td>
        <td><a data-bind="projlink: key, text: projname"></a></td>
        <td data-bind="text: request"></td>
        <td data-bind="text: stage"></td>
        <td data-bind="text: type"></td>
        <td data-bind="text: launch"></td>
        <td><a data-bind="mailto: ownerEmail, text: owner"></a></td>
    </tr>
</tbody>
My Questions:
Is this the right way to bind my data (which comes from an AJAX method) to an observable collection?
I expect push is doing some heavy re-calc every time I call it, such as maybe rebuilding bound DOM objects. Is there a way to either delay this recalc, or perhaps push in all my items at once?
I can add more code if needed, but I'm pretty sure this is what's relevant. For the most part I was just following Knockout tutorials from the site.
UPDATE:
Per the advice below, I've updated my code:
this.loadData = function (data)
{
    var mappedData = $.map(data, function (item) { return new ResultRow(item); });
    this.projects(mappedData);
};
However, this.projects() still takes about 10 seconds for 400 rows. I do admit I'm not sure how fast this would be without Knockout (just adding rows through the DOM), but I have a feeling it would be much faster than 10 seconds.
UPDATE 2:
Per other advice below, I gave jQuery.tmpl a shot (which Knockout supports natively), and this templating engine will draw around 400 rows in just over 3 seconds. This seems like the best approach, short of a solution that would dynamically load in more data as you scroll.
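For reference, the jQuery.tmpl integration looks roughly like this (a sketch; the template id is made up, and it assumes ResultRow exposes plain, non-observable properties — observables would need to be unwrapped, e.g. ${ code() }):

<script id="projectRowTmpl" type="text/html">
    <tr>
        <td>${ code }</td>
        <td>${ projname }</td>
        <!-- ...remaining columns... -->
    </tr>
</script>

<tbody data-bind="template: { name: 'projectRowTmpl', foreach: projects }"></tbody>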

Please see: Knockout.js Performance Gotcha #2 - Manipulating observableArrays
A better pattern is to get a reference to our underlying array, push to it, then call .valueHasMutated(). Now, our subscribers will only receive one notification indicating that the array has changed.
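Applied to the loadData function from the question, that pattern would look something like this (a sketch):

this.loadData = function (data) {
    // Get a reference to the underlying array and push into it directly,
    // bypassing the observable's per-push notifications...
    var underlyingArray = this.projects();
    for (var i = 0; i < data.length; i++) {
        underlyingArray.push(new ResultRow(data[i]));
    }
    // ...then notify subscribers once, so the DOM is rebuilt a single time.
    this.projects.valueHasMutated();
};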

As suggested in the comments.
Knockout has its own native template engine associated with the (foreach, with) bindings. It also supports other template engines, namely jquery.tmpl. Read here for more details. I haven't done any benchmarking with different engines, so I don't know if it will help. Reading your previous comment, in IE7 you may struggle to get the performance that you are after.
As an aside, KO supports any JS templating engine, if someone has written the adapter for it, that is. You may want to try others out there, as jQuery.tmpl is due to be replaced by JsRender.

Use pagination with KO in addition to using $.map.
I had the same problem with a large dataset of 1400 records until I used paging with Knockout. Using $.map to load the records did make a huge difference, but the DOM render time was still hideous. Then I tried pagination, and that made my dataset lightning fast as well as more user-friendly. A page size of 50 made the dataset much less overwhelming and reduced the number of DOM elements dramatically.
It's very easy to do with KO:
http://jsfiddle.net/rniemeyer/5Xr2X/
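The core of that fiddle is a computed that slices the full array down to the current page, so only one page's worth of rows is ever bound to the DOM. Roughly (a sketch with assumed property names):

function PagedGridModel(items) {
    var self = this;
    self.allItems = ko.observableArray(items);
    self.pageSize = ko.observable(50);
    self.pageIndex = ko.observable(0);

    // Only this slice is handed to the foreach binding.
    self.itemsOnCurrentPage = ko.computed(function () {
        var start = self.pageIndex() * self.pageSize();
        return self.allItems().slice(start, start + self.pageSize());
    });
}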

KnockoutJS has some great tutorials, particularly the one about loading and saving data
In their case, they pull data using getJSON() which is extremely fast. From their example:
function TaskListViewModel() {
    // ... leave the existing code unchanged ...

    // Load initial state from server, convert it to Task instances, then populate self.tasks
    $.getJSON("/tasks", function(allData) {
        var mappedTasks = $.map(allData, function(item) { return new Task(item); });
        self.tasks(mappedTasks);
    });
}

Give KoGrid a look. It intelligently manages your row rendering so that it's more performant.
If you're trying to bind 400 rows to a table using a foreach binding, you're going to have trouble pushing that much through KO into the DOM.
KO does some very interesting things using the foreach binding, most of which are very good operations, but they do start to break down on performance as the size of your array grows.
I've been down the long dark road of trying to bind large data-sets to tables/grids, and you end up needing to break apart/page the data locally.
KoGrid does all of this. It's been built to only render the rows that the viewer can see on the page, and to virtualize the other rows until they are needed. I think you'll find its performance on 400 items to be much better than what you're experiencing.
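Hooking it up is meant to be a one-liner along these lines (a sketch; check the KoGrid documentation for the exact option names):

<div data-bind="koGrid: { data: projects }" style="height: 400px;"></div>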

A solution to avoid locking up the browser when rendering a very large array is to 'throttle' the array such that only a few elements get added at a time, with a sleep in between. Here's a function which will do just that:
function throttledArray(getData) {
    var showingDataO = ko.observableArray(),
        showingData = [],
        sourceData = [];
    ko.computed(function () {
        var data = getData();
        // If the data has changed substantially, feed it in in batches...
        if ( Math.abs(sourceData.length - data.length) / sourceData.length > 0.5 ) {
            showingData = [];
            sourceData = data;
            (function load() {
                // Stop if a newer dataset arrived in the meantime.
                if ( data == sourceData && showingData.length != data.length ) {
                    showingData = showingData.concat( data.slice(showingData.length, showingData.length + 20) );
                    showingDataO(showingData);
                    setTimeout(load, 500);
                }
            })();
        } else {
            // ...otherwise update in one go.
            showingDataO(showingData = sourceData = data);
        }
    });
    return showingDataO;
}
Depending on your use case, this could result in massive UX improvement, as the user might only see the first batch of rows before having to scroll.
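Hypothetical usage, assuming the raw rows land in a plain observableArray when the AJAX call returns:

var rawProjects = ko.observableArray([]); // filled in one shot by the AJAX callback

// Bind the template to this throttled view of the data instead;
// reading rawProjects() inside getData registers the dependency.
this.projects = throttledArray(function () { return rawProjects(); });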

Taking advantage of push() accepting variable arguments gave the best performance in my case.
1300 rows took 5973 ms (~6 seconds) to load. With this optimization, the load time went down to 914 ms (<1 second).
That's an 84.7% improvement!
More info at Pushing items to an observableArray
this.projects = ko.observableArray( [] ); //Bind to empty array at startup

this.loadData = function (data) //Called when AJAX method returns
{
    var arrMappedData = ko.utils.arrayMap(data, function (item) {
        return new ResultRow(item);
    });
    //take advantage of push accepting variable arguments
    this.projects.push.apply(this.projects, arrMappedData);
};

I've been dealing with huge volumes of incoming data, and for me valueHasMutated worked like a charm.
View Model:
this.projects([]); // make the observableArray empty --(1)
var mutatedArray = this.projects(); // get a reference to the underlying array --(2)

this.loadData = function (data) // Called when AJAX method returns
{
    ko.utils.arrayForEach(data, function (item) {
        mutatedArray.push(new ResultRow(item)); // push to the plain array --(3)
    });
    this.projects.valueHasMutated(); // notify subscribers in one go --(4)
};
After calling (4), the array data is loaded into the target observableArray, this.projects, automatically.
If you have time, have a look at this, and let me know in case of any trouble.
The trick here: by doing it this way, any dependencies (computeds, subscriptions, etc.) are not triggered at each push; they all execute in one go after calling (4).

A possible work-around, in combination with using jQuery.tmpl, is to push items one at a time to the observable array in an asynchronous manner, using setTimeout:
var self = this,
    remaining = data.length;

add(); // Start adding items

function add() {
    self.projects.push(data[data.length - remaining]);
    remaining -= 1;
    if (remaining > 0) {
        setTimeout(add, 10); // Schedule adding any remaining items
    }
}
This way, since you only add a single item at a time, the browser / knockout.js can take its time to manipulate the DOM accordingly, without the browser being completely blocked for several seconds, so the user can scroll the list in the meantime.

I've been experimenting with performance, and have two contributions that I hope might be useful.
My experiments focus on the DOM manipulation time. So before going into this, it is definitely worth following the points above about pushing into a JS array before creating an observable array, etc.
But if DOM manipulation time is still getting in your way, then this might help:
1: A pattern to wrap a loading spinner around the slow render, then hide it using afterRender
http://jsfiddle.net/HBYyL/1/
This isn't really a fix for the performance problem, but shows that a delay is probably inevitable if you loop over thousands of items and it uses a pattern where you can ensure you have a loading spinner appear before the long KO operation, then hide it afterwards. So it improves the UX, at least.
Ensure you can load a spinner:
// Show the spinner immediately...
$("#spinner").show();

// ... by using a timeout around the operation that causes the slow render.
window.setTimeout(function() {
    ko.applyBindings(vm);
}, 1);
Hide the spinner:
<div data-bind="template: {afterRender: hide}">
which triggers:
hide = function() {
    $("#spinner").hide();
}
2: Using the html binding as a hack
I remembered an old technique from when I was working on a set-top box with Opera, building UI using DOM manipulation. It was appallingly slow, so the solution was to store large chunks of HTML as strings and load the strings by setting the innerHTML property.
Something similar can be achieved by using the html binding and a computed that derives the HTML for the table as a big chunk of text, then applies it in one go. This does fix the performance problem, but the massive downside is that it severely limits what you can do with binding inside each table row.
Here's a fiddle that shows this approach, together with a function that can be called from inside the table rows to delete an item in a vaguely-KO-like way. Obviously this isn't as good as proper KO, but if you really need blazing(ish) performance, this is a possible workaround.
http://jsfiddle.net/9ZF3g/5/
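For a flavour of the second technique, the computed boils down to something like this (a sketch assuming plain, non-observable row properties; the fiddle has the fuller version):

// Build the <tbody> contents as one big string, so the browser
// parses and inserts it in a single operation.
vm.tableHtml = ko.computed(function () {
    var rows = ko.utils.arrayMap(vm.projects(), function (p) {
        return "<tr><td>" + p.code + "</td><td>" + p.projname + "</td></tr>";
    });
    return rows.join("");
});

The markup is then simply <tbody data-bind="html: tableHtml"></tbody>.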

If using IE, try closing the dev tools.
Having the developer tools open in IE significantly slows this operation down. I'm adding ~1000 elements to an array. With the dev tools open, this takes around 10 seconds and IE freezes while it is happening. When I close the dev tools, the operation is instant and I see no slowdown in IE.

I also noticed that the Knockout.js template engine works more slowly in IE; I replaced it with underscore.js, which works much faster.

Related

probably another closure-loop issue

I am importing a set of notes into my webpage; that is, I read a JSON file locally in a loop and append the read data to the main div. No problem till now. But then I create a CKEditor instance beside each note so that the client can easily add comments to the note of interest. The comments are initially generated as several indexed empty divs in another HTML file, which are loaded into the CKEditor instances. However, all of this happens in a really large for loop (I have almost 6000 notes to be loaded in a segmented manner using if conditions), so I'm now facing the classic closure-in-a-loop problem. I have read several previous questions and answers on this and other websites and tested a number of them to get rid of the problem, but with no success so far.
The relevant segment of my JavaScript has this structure:
var q;
$.when(
    $.ajax( ... loads the JSON file that contains the notes and sets q = $.parseJSON(data) on success ... )
).then(function() {
    for (var i in q) {
        if (i is in a specific range) {
            ... several lines of code for properly importing the notes ...
            ... and generating a place for the comments to appear as:
            ... +'<div id="CKEditor'+i+'" contenteditable="true" placeholder="Put your comment here!"></div>'
            ... which is appended to the main div of the webpage
            ... Now the main problematic part begins:
            $('#temporary').empty(); // a hidden div defined somewhere in the page
            var func = (function() {
                var ilocal = i, tmp;
                return function() {
                    tmp = document.getElementById('temporary').innerHTML;
                    alert(tmp);
                    CKEDITOR.instances['CKEditor' + ilocal].setData(tmp);
                };
            })();
            $.when(
                $('#temporary').load("NewComments.htm #verse-" + i)
            ).then(func);
        }
    }
    CKEDITOR.disableAutoInline = true;
    CKEDITOR.inlineAll();
})
Maybe the problem is not the loop itself but the nested $.when().then(). Any suggestions to resolve the issue?
The problem is that there is only a single $('#temporary') div in your page, which will be re-used and overwritten by every iteration. In particular, in your callback
document.getElementById('temporary').innerHTML;
…
CKEDITOR.instances['CKEditor'+ilocal]
the ilocal (and tmp) variables are indeed local to the IIFE and that particular iteration, but document.getElementById is global. It will return the same element every time.
A quick fix is to create a new element for every request, and assign it to tmp during the iteration (like you assign i to ilocal) instead of when the func is called.
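For illustration, that quick fix could look something like this (a sketch; it gives each iteration its own detached container and uses load's completion callback, so nothing is shared between iterations):

(function (ilocal) {
    var tmp = $('<div>'); // fresh, detached container for this iteration only
    tmp.load("NewComments.htm #verse-" + ilocal, function () {
        CKEDITOR.instances['CKEditor' + ilocal].setData(tmp.html());
    });
})(i);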
A much better practice however would be not to use $('#temporary').load("NewComments.htm #verse-"+i) multiple times, and instead load the NewComments.htm only once per Ajax and process the result as you need.

Extjs - Best way to iterate through displayed records in a buffered store

So, I'm upgrading from EXT 4.1.1a to 4.2.2 and have come across a problem with buffered stores. In 4.1.1a I could use store.each to iterate through the currently displayed store records, but in 4.2.2 I simply get the error:
TypeError: Cannot read property 'length' of undefined
Basically inside the store object the data property does not have an items property anymore, and the each method uses the length property of the items, hence the error. Instead the items in the store seem to reside in data.map. I could loop through data.map but it seems there should be a better way. The docs only mention store.each as the way to do this even though this seems to fail for buffered stores.
I'm iterating through the store in the refresh listener attached to the grid's view.
Any help with this would be much appreciated
Apparently they think you can't iterate over the store because it has "sparse" data, but that is not true. Currently, what you could do is the following.
if (store.buffered) {
    // forEach is private and part of the private PageMap
    store.data.forEach(function(record, recordIdx) {
        /* Do stuff with the record here */
    }, this);
} else {
    store.each(function(record) {
        /* Do the same stuff I guess */
    }, this);
}
IMPORTANT
Take care: they can change the internal structure of the store in the future, which will surely break your code.
Additionally, I strongly believe that if proper design patterns were used, each would take care of the looping without the caller having to know about the store's internal structure.
OPTIMIZATION
What I usually do when I initialize the store is the following:
if (store.buffered) {
    // Bind forEach to the PageMap; otherwise `this` inside it would be the store
    store.iterate = Ext.Function.bind(store.data.forEach, store.data);
} else {
    store.iterate = Ext.Function.bind(store.each, store);
}
Then you could just use it like this:
store.iterate(fn, scope);
This is not the best solution, but it avoids writing a lot of if-statements.

knockout view model to represent a single object

Edit: This answer here seems to have provided the solution; because I am a lazy sod and was trying to avoid having to define my model in two places (once on the server, once on the client), I figured there had to be a way. By using the custom binding in the linked solution, I'm able to have the observables created from the various form elements' data-bind attributes, so basically it builds the model from the form. So it's effectively driving the model definition from the form. I haven't decided yet if this is a bad idea :)
I'm wondering what I'm doing wrong (or indeed, if I even am doing anything wrong). I need to create a form to edit a single record at a time, which has just got some simple text/number properties:
{ ItemCode: "ABCD", LotNumber: 1234, ID: 4885, MeasuredValue1: 90 }
I decided to use ko with the mapping plugin to do it. I'm fairly new to ko.
Anyway I ended up with a view model like this:
var LotModel = function() {
    var self = this;
    self.Update = function(itemcode, lotnumber) {
        var data = { ItemCode: itemcode, LotNumber: lotnumber };
        // DoAjax is just a utility function and is working fine.
        DoAjax("POST", "LotHistory.aspx/GetLotHistory", data,
            function(msg) {
                ko.mapping.fromJS(msg.d, null, self);
                ko.applyBindings(self);
            },
            function(xhr, ajaxOptions, thrownError) {
                AjaxFailure(xhr, ajaxOptions, thrownError);
            }
        );
    };
};
And later on in my code,
var lm = new LotModel();
and finally in $(document).ready
ko.applyBindings(lm);
Now it works, except that if you see in the view model, every time I load data I have to re-call ko.applyBindings(self) in the vm's Update function.
If I comment out that line, it doesn't bind. I think this is because I'm only binding a single object (i.e. the view model itself is the object after the ko.mapping plugin does its work), but everywhere I read about KO it seems to say "you only need to call this once, ever."
So I can't help feeling I am missing something really obvious; commenting out ko.applyBindings(lm) in the document ready function doesn't make any difference because I automatically call lm.Update in document.ready but commenting it out in the viewmodel breaks it.
So my question is - am I doing this the wrong way? Is it overkill for just a single object at a time type binding? I mean it doesn't bother me too much, it works as I want it to but still, it's nagging at me...
It's indeed best not to reapply bindings many times if avoidable. The problem is that you don't have any observable properties in your viewmodel to begin with. An initial call to ko.mapping.fromJS can fix this (or you can manually add the observables) e.g.:
ko.mapping.fromJS({
    ItemCode: '', LotNumber: 0, ID: 0, MeasuredValue1: 0
}, null, self);
See fiddle for a working example: http://jsfiddle.net/antishok/qpwqH/1/
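The manual equivalent would be a sketch like the following: declare each observable up front, and let ko.mapping.fromJS simply update them on every AJAX response.

// Manually declared observables, matching the record's properties.
self.ItemCode = ko.observable('');
self.LotNumber = ko.observable(0);
self.ID = ko.observable(0);
self.MeasuredValue1 = ko.observable(0);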

Loading data with dependentObservable causing an infinite loop

I'm playing around with Knockout and now trying to use the Knockout address plugin (based on jQuery address).
The code below works, except that when I enter an address directly, the page linkObservableToUrl produces is loaded without the right tags. I guess something is wrong in the way I'm loading the messages, but I'm not sure how this should be done with the Knockout framework.
I've got the following code, which is causing an infinite loop:
var viewModel = {
    page: ko.observable(1),
    //messages: ko.observableArray([]),
    tags: ko.observable()
};

viewModel.filterTags = function (filterTags) {
    viewModel.tags(filterTags);
};

viewModel.messages = ko.dependentObservable(function () {
    $.ajax(
        // abbreviated
        data: ko.toJSON(viewModel),
        // abbreviated
    );
}, viewModel);

ko.applyBindings(viewModel);
ko.linkObservableToUrl(viewModel.tags, "tags", null);
How can I solve this and still have the messages depend on page and tags?
Switch to AngularJS. Angular's databinding is much better than Knockout's. Many of the problems you are encountering here, with infinite loops and so on, are due to Knockout's need for observable wrappers.
Angular does not require observable wrappers around your objects. Angular can observe any standard JSON/JavaScript object and databind directly to any HTML element via MVVM.
In Angular, you would simply make your AJAX call for ViewModel.messages, and the standard JSON would be applied to your ViewModel.messages property. No observable wrappers. This eliminates the need for ko.dependentObservable() and thus removes your infinite loop.
http://www.angularjs.org
In the second example (which is quite long for a code snippet) you have this:
viewModel.messages = ko.dependentObservable(function () {
...
data: ko.toJSON(viewModel),
...
If the call to ko.toJSON tries to get the value of all the observable properties on the view model, it will try to evaluate the viewModel.messages property. That will call ko.toJSON again, leading to an infinite loop.
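One way out (a sketch; the /messages endpoint is assumed) is to keep messages as a plain observableArray and let a separate dependentObservable issue the request, serializing only the observables it actually depends on:

viewModel.messages = ko.observableArray([]);

// Re-evaluates whenever page or tags change; it never reads messages
// itself, so evaluating it cannot recurse.
ko.dependentObservable(function () {
    var payload = ko.toJSON({ page: viewModel.page, tags: viewModel.tags });
    $.ajax({
        url: "/messages", // hypothetical endpoint
        data: payload,
        success: function (result) {
            viewModel.messages(result);
        }
    });
}, viewModel);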

dojo.data.objectStore.deleteItem

I have a dojo.store.Memory wrapped in a dojo.data.ObjectStore which I am then plugging into a dataGrid. I want to delete an item from the store and have the grid update. I have tried every combonation I can think of with no success. For example:
var combinedStore = new dojo.data.ObjectStore({
    objectStore: new dojo.store.Memory({ data: combinedItems })
});

combinedStore.fetch({ query: { id: 'itemId' }, onComplete: function (items) {
    var item = items[0];
    combinedStore.deleteItem(item);
    combinedGrid.setStore(combinedStore);
}});

combinedGrid.setStructure(gridLayout);
This throws no errors, but combinedStore.objectStore.data still has the item that was meant to be deleted, and the grid still displays it. (There also seems to be a complete mismatch between combinedStore.objectStore.data and combinedStore.objectStore.index.)
There's a simple solution, luckily! The delete is successfully happening, however, you need to save the ObjectStore after the deletion for it to be committed.
Change your code to look like this:
onComplete: function (items) {
    var item = items[0];
    combinedStore.deleteItem(item);
    combinedStore.save();
    combinedGrid.setStore(combinedStore);
}
That little save should do the trick. (Please note: the save must occur after the deleteItem; if you put it outside the fetch block, then due to the fetch being asynchronous, it will actually happen before the onComplete!)
Working example: http://pastehtml.com/view/b34z5j2bc.html (Check your console for results.)
This does seem rather poorly documented at present in the new dojo.store documentation.
The old dojo.data.api.Write documentation make it fairly clear. An excerpt from http://dojotoolkit.org/reference-guide/dojo/data/api/Write.html:
Datastores that implement the Write interface act as a two-phase intermediary between the client and the ultimate provider or service that handles the data. This allows for the batching of operations, such as creating a set of new items and then saving them all back to the persistent store with one function call.

The save API is defined as asynchronous. This is because most datastores will be talking to a server and not all I/O methods for server communication can perform synchronous operations.

Datastores track all newItem, deleteItem, and setAttribute calls on items so that the store can both save the items to the persistent store in one chunk and have the ability to revert out all the current changes and return to a pristine (unmodified) data set.

Revert should only revert the store items on the client side back to the point the last save was called.
dojo.store has evolved from dojo.data and seems to follow many of its behavioral aspects.
The new dojo.store documentation at http://www.sitepen.com/blog/2011/02/15/dojo-object-stores/ manages to talk specifically about the delete operation without mentioning having to call save() (in fact I can't find the word 'save' on that page at all).
I'm staying away from dojo.store as long as possible, hopefully it will be easier to follow in 1.7 or later, whenever I'm forced to use it for real :)
