JavaScript PouchDB doesn't load on Safari & Firefox

We have built a small script and a database based on PouchDB in order to display all the products of one of our clients in a so-called "product tree".
You can find the product tree here: http://www.bodyrevitaliser.nl/nl/service/product-tree/
As you can see, the tree loads properly only in Chrome. If you check the console in Safari and Firefox, the DB seems to be loaded as well, but something seems to be blocking the tree itself from loading.
What are your thoughts? Any ideas what might be causing this, and how to solve it?

The problem with your code is that your usage of promises is not correct. I strongly recommend you read this blog post: We have a problem with promises. I know it's long, but it's worthwhile to read the whole thing.
In particular, read the section called "WTF, how do I use forEach() with promises?", because that is exactly the mistake you're making. You are doing a bunch of insertions inside of a $.each, and then you are immediately doing an allDocs() inside the same function. So you have zero guarantees that any documents have actually been inserted into PouchDB by the time you try to read from PouchDB. Perhaps they will have been, perhaps not; it all depends on subtle timing differences between browsers, so you can't count on it.
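A minimal sketch of the fix, assuming the insert loop iterates over an array (here called products; the db handle and the renderProductTree() function are likewise placeholders, not the original code): collect the insertion promises and only call allDocs() once all of them have resolved.

var insertions = products.map(function (product) {
    return db.put(product);                      // each put() returns a promise (assumes each doc has an _id)
});

Promise.all(insertions)                          // wait until every document is actually stored
    .then(function () {
        return db.allDocs({ include_docs: true });
    })
    .then(function (result) {
        renderProductTree(result.rows);          // placeholder for the tree-building code
    })
    .catch(function (err) {
        console.error(err);
    });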

Related

Cloning knockout observables with lodash.clonedeep

I needed some ideas regarding my current issue...
I'm currently improving a small but common part of a big piece of software. This means I may not replace existing functionality without creating extreme testing overhead or even impacting that functionality.
I'm trying to clone a data structure that besides normal data contains lots of knockout observables. Doing this I see serious performance differences between Chrome and IE.
It could be possible that there are cyclic references; it could also be possible that knockout observables somehow do not like being cloned. Or perhaps Internet Explorer has reached some kind of data size limit that can be extended. Whatever the reason is, Chrome somehow does better.
The problem is made harder by the following:
I need this huge data structure of several tens of thousands of "beautified JSON" lines to simulate real life and to reproduce the problem. The amount of data cannot be reduced to a debuggable size, because the smaller the data gets, the smaller the timing difference becomes, until it finally vanishes. Publishing the data structure is also not possible for me, because I cannot anonymize that amount of data. And with data of this size it is not really possible to spot data problems while debugging.
I do not think that Google is that superior to Microsoft, so I expect the problem to be data-dependent or something that can be eliminated by a browser setting.
All over the internet I can read that lodash.clonedeep() is slow or that Internet Explorer is slow. Even if so, that is not my problem: in my opinion it does not explain the huge difference between Chrome and Internet Explorer.
Such an answer is too simple for me, and I need some improvement, because most of our customers use Internet Explorer and that is impossible to change.
I did run the following lines in Chrome and Internet Explorer.
console.time('jquery.extend()');
dataCopy = $.extend(true, {}, viewModel.api.getData());
console.timeEnd('jquery.extend()');
console.time('lodash.clonedeep()');
dataCopy = _.cloneDeep(viewModel.api.getData());
console.timeEnd('lodash.clonedeep()');
console.time('JSON.parse(JSON.stringify())');
dataCopy = JSON.parse(JSON.stringify(viewModel.api.getData()));
console.timeEnd('JSON.parse(JSON.stringify())');
The output is as follows
Chrome Version 62.0.3202.94:
jquery.extend(): 1159.470947265625ms
lodash.clonedeep(): 2783.241943359375ms
JSON.parse(JSON.stringify()): 1139.403076171875ms
Internet Explorer Version 11.0.9600.18837:
jquery.extend(): 10.802,27ms
lodash.clonedeep(): 28.302,65ms
JSON.parse(JSON.stringify()): 10.479,681ms
Does anybody have an idea regarding
why this huge difference exists?
a way to accelerate IE to (almost) the speed of Chrome in this context?
a hint on what to look at, or how to find out what the problem is?
a way to find out what is going on in lodash.clonedeep() with such huge data? (Something more efficient than breakpoints.)
possible debug output messages from anywhere that could help?
whether there is a known impact of knockout observables on lodash.clonedeep() in IE?
whether knockout observables somehow do not like to be cloned?
whether cyclic structures are a problem for cloning at all? (If so, why is Chrome that much faster?)
I'm searching for resolutions as well as for best practices.
Thanks to all of you!
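One variant that might be worth adding to the benchmark (this is not from the original post, and assumes the view model data can be unwrapped with Knockout's standard ko.toJS helper): convert the observables into plain data first, so the deep clone only walks plain objects instead of observable function wrappers. Note that the copy is then plain data rather than observables, which may or may not be acceptable for the use case.

console.time('ko.toJS() + lodash.clonedeep()');
var plainData = ko.toJS(viewModel.api.getData());   // recursively reads observables into plain values
dataCopy = _.cloneDeep(plainData);                   // clone data only, no observable wrappers
console.timeEnd('ko.toJS() + lodash.clonedeep()');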

JavaScript rule of thumb for delay length when using setTimeout() to allow a "loading" popup to appear

I'm using the setTimeout() function in JavaScript to allow a popup that says "loading" to be shown while I'm parsing some XML data. I found that at small enough delay values (below 10ms) the popup doesn't have time to appear before the browser freezes for a moment to do the actual work.
At 50ms, it has plenty of time, but I don't know how well this will translate to other systems. Is there some sort of "rule of thumb" that would dictate the amount of delay necessary to ensure a visual update without causing unnecessary delay?
Obviously, it'll depend on the machine on which the code is running etc., but I just wanted to know if there was anything out there that would give a little more insight than my guesswork.
The basic code structure is:
showLoadPopup();

var t = setTimeout(function () {
    parseXML(); // real work
    hideLoadPopup();
}, delayTime);
Thanks!
UPDATE:
Turns out that parsing XML is not something that Web Workers can usually do since they don't have access to the DOM or the document etc. So, in order to accomplish this, I actually found a different article here on Stack Overflow about parsing XML inside a Web Worker. Check out the page here.
By serializing my XML object into a string, I can pass it into the Web Worker through a message post. Then, using the JavaScript-only XML parser that I found in the aforementioned link, I turn it back into an XML object within the Web Worker, do the parsing needed, and pass back the desired text as a string, without making the browser hang at all.
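A minimal sketch of that setup (the worker file name, the parser library, and the helper functions are assumptions, not the actual code):

// main.js -- serialize the XML and hand it to the worker
var worker = new Worker('xml-worker.js');                       // hypothetical worker file
worker.onmessage = function (e) {
    hideLoadPopup();
    showResult(e.data);                                         // hypothetical UI update with the returned text
};
showLoadPopup();
worker.postMessage(new XMLSerializer().serializeToString(xmlDoc));

// xml-worker.js -- no DOM access here, so a pure-JS parser is required
importScripts('pure-js-xml-parser.js');                         // hypothetical JavaScript-only XML parser
self.onmessage = function (e) {
    var doc = parseXml(e.data);                                 // assumed API of the bundled parser
    self.postMessage(extractWantedText(doc));                   // heavy work happens off the main thread
};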
Ideally you would not ever have to parse something on the client side that actually causes the browser to hang. I would look into moving this to an ajax request that pulls part of the parsed xml (child nodes as JSON), or look at using Web Workers or a client side asynchronous option.
There appears to be no "rule of thumb" for this question, simply because the setTimeout() delay was not the best solution to the problem. Using alternative methods to do the real meat of the work was the real solution, not tuning a setTimeout() delay to allow a visual update of the page.
The options given were:
HTML 5's new Web Worker option (alternative information)
Using an AJAX request
Thanks for the advice, all.

But why's the browser DOM still so slow after 10 years of effort?

The web browser DOM has been around since the late '90s, but it remains one of the largest constraints in performance/speed.
We have some of the world's most brilliant minds from Google, Mozilla, Microsoft, Opera, W3C, and various other organizations working on web technologies for all of us, so obviously this isn't a simple "Oh, we didn't optimize it" issue.
My question is: if I were to work on the part of a web browser that deals specifically with this, why would I have such a hard time making it run faster?
My question is not asking what makes it slow, it's asking why hasn't it become faster?
This seems to be against the grain of what's going on elsewhere, such as JS engines with performance near that of C++ code.
Example of quick script:
for (var i = 0; i <= 10000; i++) {
    someString = "foo";
}
Example of slow because of DOM:
for (var i = 0; i <= 10000; i++) {
    element.innerHTML = "foo";
}
Some details as per request:
After benchmarking, it looks like it's not an unsolvably slow issue; rather, the wrong tool is often used, and the right tool depends on what you're doing cross-browser.
DOM efficiency varies greatly between browsers, but my original presumption that the DOM is slow and that nothing can be done about it seems to be wrong.
I ran tests against Chrome, FF4, and IE 5-9; you can see the operations per second on the test page linked below.
Chrome is lightning fast when you use the DOM API, but vastly slower using the .innerHTML setter (roughly 1000-fold slower). FF is worse than Chrome in some areas (for instance, the append test is much slower), but its innerHTML test runs much faster than Chrome's.
IE seems to actually be getting worse at DOM append and better at innerHTML as you progress through the versions since 5.5 (e.g., 73 ops/sec in IE8 down to 51 ops/sec in IE9).
I have the test page over here:
http://jsperf.com/browser-dom-speed-tests2
What's interesting is that different browsers seem to face different challenges when generating the DOM. Why is there such disparity here?
When you change something in the DOM it can have myriad side-effects to do with recalculating layouts, style sheets etc.
This isn't the only reason: when you set element.innerHTML=x you are no longer dealing with ordinary "store a value here" variables, but with special objects which update a load of internal state in the browser when you set them.
The full implications of element.innerHTML=x are enormous. Rough overview:
parse x as HTML
ask browser extensions for permission
destroy existing child nodes of element
create child nodes
recompute styles which are defined in terms of parent-child relationships
recompute physical dimensions of page elements
notify browser extensions of the change
update Javascript variables which are handles to real DOM nodes
All these updates have to go through an API which bridges JavaScript and the HTML engine. One reason that JavaScript is so fast these days is that we compile it to some faster language or even machine code, and masses of optimisations happen because the behaviour of the values is well defined. When working through the DOM API, none of this is possible. Speedups elsewhere have left the DOM behind.
Firstly, anything you do to the DOM could be a user-visible change. If you change the DOM, the browser has to lay everything out again. It could be faster if the browser batched the changes and only laid them out every X ms (assuming it doesn't do this already), but perhaps there's not a huge demand for this kind of feature.
Second, innerHTML isn't a simple operation. It's a dirty hack that MS pushed, and other browsers adopted because it's so useful; but it's not part of the standard (IIRC). Using innerHTML, the browser has to parse the string, and convert it to a DOM. Parsing is hard.
Original test author is Hixie (http://nontroppo.org/timer/Hixie_DOM.html).
This issue has been discussed on Stack Overflow here and on Connect (bug tracker) as well. With IE10, the issue is resolved. By resolved, I mean they have partially moved on to another way of updating the DOM.
The IE team seems to handle DOM updates much like the Excel-macros team at Microsoft, where it's considered poor practice to update the live cells on the sheet. You, the developer, are supposed to take the heavy-lifting task offline and then apply the updates to the live sheet in a batch. In IE you are supposed to do that using a document fragment (as opposed to the document). With newly emerging ECMA and W3C standards, document fragments are deprecated. So the IE team has done some good work to contain the issue; a sketch of this batching approach appears after the quoted comment below.
It took them a few weeks to strip it down from ~42,000 ms in IE10-ConsumerPreview to ~600 ms in IE10-RTM. But it took a lot of leg-pulling to convince them that this IS an issue. Their claim was that there is no real-world example with 10,000 updates per element. Since the scope and nature of rich internet applications (RIAs) can't be predicted, it's vital to have performance close to the other browsers in the league. Here is another take on the DOM by the OP on MS Connect (in the comments):
When I browse to http://nontroppo.org/timer/Hixie_DOM.html, it takes
~680ms and if I save the page and run it locally, it takes ~350ms!
Same thing happens if I use button-onclick event to run the script
(instead of body-onload). Compare these two versions:
jsfiddle.net/uAySs/ <-- body onload
vs.
jsfiddle.net/8Kagz/ <-- button onclick
Almost 2x difference..
Apparently, the underlying behavior of onload and onclick varies as well. It may get even better in future updates.
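A minimal sketch of that batching approach (the loop count and content are illustrative, and element is the container from the examples above): build the nodes in a detached DocumentFragment, then touch the live DOM once.

var fragment = document.createDocumentFragment();
for (var i = 0; i < 10000; i++) {
    var div = document.createElement('div');
    div.textContent = 'foo ' + i;
    fragment.appendChild(div);      // no layout work yet: the fragment is off-document
}
element.appendChild(fragment);      // a single live-DOM update triggers one reflow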
Actually, innerHTML is less slow than createElement.
In an effort to optimize, I found JS can parse enormous JSON effortlessly. JSON parsers can handle a huge number of nested function calls without issues. One can toggle thousands of elements between display:none and display:block without issues.
But if you try to create a few thousand elements (or even simply clone them), performance is terrible. You don't even have to add them to the document!
Then, once they are created, inserting them into and removing them from the page works super fast again.
It looks to me like the slowness has little to do with their relation to other elements of the page.
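A rough sketch of how one might reproduce that observation (the element count and markup are arbitrary, and the absolute numbers will vary by browser):

var N = 5000;

console.time('create detached nodes');
var nodes = [];
for (var i = 0; i < N; i++) {
    var div = document.createElement('div');    // never added to the document
    div.textContent = 'item ' + i;
    nodes.push(div);
}
console.timeEnd('create detached nodes');

console.time('insert and remove existing nodes');
for (var j = 0; j < N; j++) {
    document.body.appendChild(nodes[j]);
}
for (var k = 0; k < N; k++) {
    document.body.removeChild(nodes[k]);
}
console.timeEnd('insert and remove existing nodes');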

Dojox's JsonRestStore loading the same thing several times

I'm using a lazy-loading tree in a web app project; however, I've run into some strange behavior. It seems a simple tree with just 3 levels causes 7 requests for the root structure. After looking at the official JRS tree test, I'm not sure whether this is normal or not.
Have a look at this example:
http://download.dojotoolkit.org/release-1.6.1/dojo-release-1.6.1/dijit/tests/tree/Tree_with_JRS.html
When I visit it, my browser makes 5 requests for the root structure. My only question is why?
Edit: worth mentioning, this doesn't happen with dojo 1.5 or below.
Here's what it looks like in the inspector (Chrome):
Finally I found a solution to this problem, thanks to this post on dojo-interest: thisisalink.
Basically, with dojo 1.6 dijit.tree.ForestStoreModel was extended with a few new hook-like functions (I guess because of the work done on the TreeGrid). One of these, onSetItem, is called once a tree node is expanded (thus going from preloaded to fully loaded when using a lazy-loading store). In the base implementation, this function calls _requeryTop(), which requeries all root items.
For our application we could simply replace dijit.tree.ForestStoreModel with our own implementation, digicult.dijit.tree.ForestStoreModel, where onSetItem and onNewItem don't call this._requeryTop.
Sadly it's not enough to subclass the ForestStoreModel, as there are this.inherited(arguments); calls in the functions which can't be replaced easily, so we had to copy the whole class (copy the class, rename it, comment out two lines - easiest fix in a long time :-) ). This may force us to redesign the class again once we update dojo to an even newer version.
I've also faced performance problems with the dijit Tree when a tree with 10,000+ nodes had to be loaded all at once, with ~3000 items at the very top level.
The tree had only one dummy root node, which loads the whole tree on the first click via an ajax call.
In this case the tree took more than 1 minute to build and I got the 'Stop running this script' dialog popup on IE8.
After applying a few optimization steps, the tree now loads within 2 seconds on all major browsers (IE8-IE11 included).
The first optimization I made was using dijit/tree/ObjectStoreModel as the tree's model and dojo/store/Memory as the data store.
This sped up inserting the ajax response JSON nodes into the tree's data store.
The second optimization concerned the slow creation of the Tree's nodes. That took more effort to fix:
I had to extend dijit/Tree and override the setChildItems() function (the part of it which calls _createTreeNode() function).
I kept the whole logic of the setChildItems() intact, just added parallelization of creating the tree nodes using this technique:
http://www.picnet.com.au/blogs/Guido/post/2010/03/04/How-to-prevent-Stop-running-this-script-message-in-browsers.aspx
Hope it helps; if needed, I can post the source code of my workaround.
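For reference, a minimal sketch of the chunking technique from the linked article (the function name and batch size are illustrative, not the actual Tree override): the work is done in small batches, yielding to the browser between batches so the 'Stop running this script' dialog never triggers.

function processInBatches(items, batchSize, processItem, onDone) {
    var index = 0;
    (function nextBatch() {
        var end = Math.min(index + batchSize, items.length);
        for (; index < end; index++) {
            processItem(items[index]);   // e.g. create one tree node
        }
        if (index < items.length) {
            setTimeout(nextBatch, 0);    // yield to the UI thread, then continue
        } else if (onDone) {
            onDone();
        }
    })();
}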

Ajax-heavy JS apps using excessive amounts of memory over time

I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approximately 40 KB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table because the data is usually all new. I am attaching a few events to the table, approximately 5 per line, with 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's cleanup functions go in and attempt to detach all events on the elements inside the element being overwritten. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var.
I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance Javascript" by Nicholas Zakas, but didn't have much time to get into it yet.
To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000 KB in Chrome, and much more in Firefox or IE.
Thanks!
I'd suggest writing a test version of your script without events. DOM / JS circular references might be very hard to spot. By eliminating some variables from the equation, you might be able to narrow down your search a bit.
I have experienced the same thing. I had a piece of code that polled every 10 seconds and retrieved a count of the current user's errors (data entry/auditing), a simple integer. This was then used to replace the text inside a div (so the user knew when new errors were found in their work). If left overnight, the browser would end up using well over 1 GB of memory!
The closest I came to solving the issue was reducing the polling to every 2 minutes and insisting the user close the browser down at the end of the day. A better solution still would have been to use Ajax Push Engine to push data to the page ONLY when an error was created. This would have resulted in data being sent less frequently and thus less memory being used.
Are you saving anything to an object/array? I've had this happen before with a Chrome plugin where an array just kept getting larger and larger. This sounds like it might be your problem, especially considering you're fetching 40 KB every 15 seconds.
A snippet would be great. It seems as if you are creating new variables each time and the old ones aren't going out of scope, and therefore aren't being garbage collected.
Also try to encapsulate more of your JS using constructors and instances of objects, etc. When the JS is just a list of functions and all the variables have global scope rather than being properties of an instance, your JS can take up a lot of memory.
Why not draw the table and attach events only once, and just replace the table data every 15 seconds?
1) The jQuery Ajax wrapper, called recurrently, leads to memory leaks, and the community is aware of that (although the issue with the ajax wrapper is by far not as ugly as your case).
2) When it comes to optimization, you've done the first steps (using lightweight JSON calls and the delete method), but the problem is in the "event attaching" area and the html() method.
What I mean is that:
1) you are probably reattaching the listeners after each html() call
2) you redraw the whole table on each ajax call.
This indeed leads to memory leaks.
You have to:
1) draw the table (with the initial content) on the server side
2) in $(document).ready attach listeners to the table's cells
3) call the JSON service with ajax and parse the response
4) refill the table with the parsed array data (see the sketch below)
Tell us what you've achieved in the meantime :)
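A minimal sketch of that approach (the selector, field names, endpoint, and interval are assumptions, not the asker's actual markup): one delegated handler is attached once and survives any number of row replacements, and only the rows are rebuilt on each poll.

$(document).ready(function () {
    // attach a single delegated handler once; it keeps working after rows are replaced
    $('#data-table').on('click', 'td', function () {
        console.log('cell clicked:', $(this).text());
    });

    setInterval(refreshTable, 15000);               // poll every 15 seconds
});

function refreshTable() {
    $.getJSON('/data.json', function (rows) {       // hypothetical endpoint returning an array
        var html = rows.map(function (row) {
            return '<tr><td>' + row.name + '</td><td>' + row.value + '</td></tr>';
        }).join('');
        $('#data-table tbody').html(html);          // replace only the row content
    });
}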
I was having a similar problem just this week. It turned out I had a circular reference in my database. I had an inventory item ABC123 flagged as being replaced by XYZ321, and I also had XYZ321 flagged as being replaced by ABC123. Sometimes the circular reference is not in the PHP code.
