document.createElement returning 'undefined' - JavaScript

I work on a Vaadin 8-based application. In a couple of hard-to-characterize scenarios - and I haven't been able to isolate the triggering factor - document.createElement starts to return undefined on all calls. This has been seen in both IE11 and Chrome (but in different circumstances in each case). My first theory was that it might be a browser out-of-memory issue, but I created a scenario with many more DOM elements that did not reproduce the error, and memory profiling showed no notable spike in memory usage at the point the problem happens. Also, when it happens it's at a predictable point in time - not random enough to be that sort of environmental issue.
When the problem happens, the console reports an odd status for the document.createElement function - it looks 'broken', but it doesn't appear to have simply been clobbered by another function or something. Under normal circumstances the console shows the usual native-function output for document.createElement; after the problem occurs it shows something visibly different, and different again from a reference to a nonexistent attribute on document (which is plain undefined). In Chrome the behavior in the console is similar. [Console screenshots not reproduced here.]
Has anybody seen such a symptom in any browser and/or have any insight in tracking down the cause?
EDIT 17 January 2018: When I originally wrote this I only witnessed the problem behavior in IE11. Since then I have seen the same behavior under a different circumstance in Chrome.

My issue was that document.createElement was being replaced by some (buggy) injected anti-phishing JS I wasn't aware of. The problem with that JS is outside the scope of this question, but debugging tips provided in comments on the question were valuable in tracking it down:
The fact that document.createElement was being shadowed was discovered by noting that document.hasOwnProperty('createElement') returned true.
Defining a setter function for document.createElement that triggers the debugger helped me track down the offending code. I used the break-on-access snippet for this purpose, but a roll-your-own alternative is simple to write; a sketch follows.
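A minimal sketch of such a setter (my illustration, not the exact snippet from the original comments):

// Break as soon as anything overwrites document.createElement.
// Run this before the offending script loads.
(function () {
  var current = document.createElement;
  Object.defineProperty(document, 'createElement', {
    configurable: true,
    get: function () { return current; },
    set: function (value) {
      debugger;          // pause here; the call stack reveals the culprit
      current = value;
    }
  });
})();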

Related

In Chrome DevTools, is there any way to detect and break when a JavaScript variable has been modified?

In various Unix/Linux debugging tools it was possible to trap when a particular variable (internally, a memory location) is modified, in order to identify the offending culprit. Is there any equivalent in Chrome DevTools? Setting a watch only takes effect at some other break; setting a live expression is nothing more than polling. If your suggestion depends on polling, it will not trap and identify the offending code. If your suggestion depends on setting a breakpoint at a particular line of code, that too does not identify the bad actor.
Context: I am trying to detect when an execution stack has been corrupted. The code manipulating the stack has, as I said, breakpoints set darn near everywhere the stack is modified (wherever there is a push/pop), but to no avail, so delineating the code itself here will be of no use for this question.
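One pattern that can trap mutations without polling (my sketch, not from the thread) is to wrap the suspect structure in a Proxy whose traps break into the debugger. This only helps if every push/pop goes through the proxied reference:

// Trap every mutation of a suspect array via a Proxy.
var stack = new Proxy([], {
  set: function (target, prop, value) {
    debugger;            // index writes and length updates all land here
    target[prop] = value;
    return true;
  },
  deleteProperty: function (target, prop) {
    debugger;
    delete target[prop];
    return true;
  }
});

stack.push(1);           // triggers the set trap (index 0, then length)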

JavaScript Chrome profiler granularity - go deeper

I'm currently debugging an AngularJS-based app. I have some runtime speed issues on the client side and want to analyze why.
I use the DevTools profiler in Chrome. I can see that some events (e.g. keypress, blur) take a lot of time.
Now I would like to go deeper and find out which source code contains these event listeners and causes my application to slow down like this.
For information, the app is very slow when I type text into an input, and when I focus/blur that input; I suspect that some watchers could be causing the slowdown, but I'm not sure.
I hope a deeper profiler analysis can help!
--- Edit 25 Feb 2020 ---
I think my problem is linked to the digest cycle ($apply/$digest, etc.).
I found this plugin: digest-hud. After several tries, it seems that a binding called "source" (used in a lot of components) was consuming all the digest resources.
Digest-hud was really helpful. I couldn't find a way to pinpoint the initial calls of the offending function on the call stack; as Kresimir Pendic said, it was probably a map issue.
But I found a lot of bindings/watchers on "source", and one of them was invoked on every single focus/blur/typing event. So I removed it, found another way to signal changes within the input, and it works.
So don't hesitate to check with digest-hud if you have a performance issue in your AngularJS app (disclaimer: I'm not related in any way to the digest-hud developer(s)); it will give you some hints for solving the problem.
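As a complement to digest-hud, here is a rough way to gauge watcher build-up (my sketch, assuming AngularJS 1.x with debug info enabled, which is the default outside production builds; run it in the DevTools console):

// Count $$watchers across all scopes currently attached to the DOM.
// Treat the result as approximate: elements sharing a scope are
// counted once per element.
function countWatchers() {
  var total = 0;
  document.querySelectorAll('.ng-scope, .ng-isolate-scope').forEach(function (el) {
    var $el = angular.element(el);
    var scope = $el.isolateScope() || $el.scope();
    if (scope && scope.$$watchers) {
      total += scope.$$watchers.length;
    }
  });
  return total;
}
countWatchers();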

Dealing with stale elements when using WebDriver with Backbone.js

We are using Backbone.js and having issues when running our WebDriver tests. We are getting the following error:
org.openqa.selenium.StaleElementReferenceException: Error Message => 'Element does not exist in cache'
Our understanding is that this happens when we find an element and then execute an action on it (e.g. click()). The element we found has gone 'stale', and we suspect it has been re-rendered or modified.
We have seen lots of solutions that we are not keen on:
Use Thread.Sleep(...). We don't want explicit sleeps in our code
Using a retry strategy, either as a loop or try-catching the StaleElementReferenceException. We feel this is not the right/clean solution, and is prone to breaking in the future
Some people are using WebDriverWait and waiting until some javascript function execution returns true. We have seen people wait for notifyWhenNoOutstandingRequests(callback) in Angular, but can't find anything obvious for Backbone.
We are hoping there is a clean solution that does not involve explicit sleeping, or some form of looping. Any thoughts?
I looked into WebDriverWaits a bit more and I think I've come up with a combination of expectations that works for us:
wait.until(refreshed(elementToBeClickable(...)));
The refreshed expectation is a wrapper for other expectations that deals with StaleElementReferenceException, and the elementToBeClickable expectation checks that the element is clickable. Interestingly, looking at the source for the built-in expectations, some of them handle StaleElementReferenceException themselves while others (e.g. presenceOfElementLocated) don't and need to be wrapped in refreshed. I think that's what initially threw me off when I first looked at WebDriverWaits.
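For anyone driving similar tests from Node, the JavaScript selenium-webdriver bindings have no built-in refreshed wrapper, but the same idea is easy to sketch (the helper below and the '#save' locator are illustrative, not from this thread):

// A refreshed()-style condition: swallow staleness (and not-found)
// so driver.wait keeps polling until the element is stable.
const { By, error } = require('selenium-webdriver');

function refreshed(condition) {
  return async (driver) => {
    try {
      return await condition(driver);      // truthy value ends the wait
    } catch (e) {
      if (e instanceof error.StaleElementReferenceError ||
          e instanceof error.NoSuchElementError) {
        return null;                       // element re-rendered; poll again
      }
      throw e;
    }
  };
}

async function clickWhenStable(driver) {
  const el = await driver.wait(refreshed(async (d) => {
    const candidate = await d.findElement(By.css('#save'));
    const ok = (await candidate.isDisplayed()) && (await candidate.isEnabled());
    return ok ? candidate : null;
  }), 5000);
  await el.click();
}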

jQuery: how to track caught exceptions

I am debugging the site with Chrome Developer Tools. If I check "pause on all exceptions", it pauses a few times when the site is loading and points to jquery.min.js(#line). These are only caught exceptions.
How can I track it back to see which function of my code causes the exception in jquery?
Also, should I really spend some time to track it down, if all my scripts function properly?
Thanks
Update: the problem is that I cannot see any of my functions in the call stack - only jQuery calls. [Screenshot not reproduced here.]
Perhaps I can safely ignore these since all the exceptions are handled.
For issues like the one you're dealing with, I find the printStackTrace method handy and keep it in my dev toolkit.
http://www.eriwen.com/javascript/js-stack-trace/
In a method where I'm having issues, I'll simply do the following:
var trace = printStackTrace();
console.log(trace);
I hope this might help you out. Good luck.
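On current browsers you no longer need a library for this; the built-ins expose the same information:

// Built-in equivalents of printStackTrace() in modern browsers:
console.trace('how did we get here?');   // logs the current call stack
var stack = new Error().stack;           // the stack as a plain string
console.log(stack);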
You can view the call stack in the debugger to see if your code caused the invoked code to throw an exception. Unfortunately, you may see some exceptions that were triggered within code running within a timer. Also, keep in mind that jQuery sometimes does a try..catch to detect browser traits, so you really should only be concerned with unhandled exceptions.

But why's the browser DOM still so slow after 10 years of effort?

The web browser DOM has been around since the late '90s, but it remains one of the largest constraints in performance/speed.
We have some of the world's most brilliant minds from Google, Mozilla, Microsoft, Opera, W3C, and various other organizations working on web technologies for all of us, so obviously this isn't a simple "Oh, we didn't optimize it" issue.
My question is: if I were to work on the part of a web browser that deals specifically with this, why would I have such a hard time making it run faster?
My question is not asking what makes it slow, it's asking why hasn't it become faster?
This seems to be against the grain of what's going on elsewhere, such as JS engines with performance near that of C++ code.
Example of a quick script:

var someString;
for (var i = 0; i <= 10000; i++) {
  someString = "foo";
}

Example of a script that is slow because of the DOM:

for (var i = 0; i <= 10000; i++) {
  element.innerHTML = "foo";
}
Some details as per request:
After benchmarking, it looks like this is not an unsolvably slow issue; rather, the wrong tool is often used, and the right tool depends on what you're doing cross-browser.
It looks like DOM efficiency varies greatly between browsers, but my original presumption that the DOM is slow and nothing can be done about it seems to be wrong.
I ran tests against Chrome, FF4, and IE 5-9, measuring operations per second. [Chart not reproduced here; the jsperf page below has the numbers.]
Chrome is lightning fast when you use the DOM API, but vastly slower using the .innerHTML operator (roughly 1000-fold slower). FF is worse than Chrome in some areas (for instance, the append test is much slower than in Chrome), but its innerHTML test runs much faster than Chrome's.
IE seems to actually be getting worse at DOM append and better at innerHTML as you progress through versions since 5.5 (e.g., 73 ops/sec in IE8 down to 51 ops/sec in IE9).
I have the test page over here:
http://jsperf.com/browser-dom-speed-tests2
What's interesting is that different browsers seem to face different challenges when generating the DOM. Why is there such disparity here?
When you change something in the DOM it can have myriad side-effects to do with recalculating layouts, style sheets etc.
This isn't the only reason: when you set element.innerHTML=x you are no longer dealing with ordinary "store a value here" variables, but with special objects which update a load of internal state in the browser when you set them.
The full implications of element.innerHTML=x are enormous. Rough overview:
parse x as HTML
ask browser extensions for permission
destroy existing child nodes of element
create child nodes
recompute styles which are defined in terms of parent-child relationships
recompute physical dimensions of page elements
notify browser extensions of the change
update Javascript variables which are handles to real DOM nodes
All these updates have to go through an API which bridges JavaScript and the HTML engine. One reason that JavaScript is so fast these days is that we compile it to some faster language or even machine code, and masses of optimisations happen because the behaviour of the values is well-defined. When working through the DOM API, none of this is possible. Speedups elsewhere have left the DOM behind.
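One practical consequence (my illustration, not from the answer above): batching node creation in a DocumentFragment pays the bridging and layout cost once instead of on every iteration:

// Build 10,000 nodes off-document, then insert them in one operation.
var container = document.getElementById('container');  // assumed to exist
var frag = document.createDocumentFragment();
for (var i = 0; i < 10000; i++) {
  var div = document.createElement('div');
  div.textContent = 'foo';
  frag.appendChild(div);     // off-document: no layout work yet
}
container.appendChild(frag); // one insertion, one layout pass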
Firstly, anything you do to the DOM could be a user-visible change. If you change the DOM, the browser has to lay everything out again. It could be faster if the browser batched changes and only laid out every X ms (assuming it doesn't do this already), but perhaps there's not a huge demand for this kind of feature.
Second, innerHTML isn't a simple operation. It's a dirty hack that MS pushed, and other browsers adopted it because it's so useful; but it's not part of the standard (IIRC). Using innerHTML, the browser has to parse the string and convert it to a DOM tree. Parsing is hard.
Original test author is Hixie (http://nontroppo.org/timer/Hixie_DOM.html).
This issue has been discussed on StackOverflow here and on Connect (the bug tracker) as well. With IE10, the issue is resolved. By resolved, I mean they have partially moved on to another way of updating the DOM.
The IE team seems to handle DOM updates the way the Excel-macros team at Microsoft does, where it's considered poor practice to update live cells on the sheet. You, the developer, are supposed to take the heavy-lifting task offline and then update the live tree in batch. In IE you are supposed to do that using a document fragment (as opposed to the document). With newly emerging ECMA and W3C standards, document fragments are deprecated. So the IE team has done some pretty good work to contain the issue.
It took them a few weeks to strip it down from ~42,000 ms in IE10-ConsumerPreview to ~600 ms in IE10-RTM. But it took a lot of arm-twisting to convince them that this IS an issue. Their claim was that there is no real-world example with 10,000 updates per element. Since the scope and nature of rich internet applications (RIAs) can't be predicted, it's vital to have performance close to the other browsers of the league. Here is another take on the DOM by the OP on MS Connect (in comments):
When I browse to http://nontroppo.org/timer/Hixie_DOM.html, it takes ~680ms, and if I save the page and run it locally, it takes ~350ms! The same thing happens if I use a button-onclick event to run the script (instead of body-onload). Compare these two versions:
jsfiddle.net/uAySs/ <-- body onload
vs.
jsfiddle.net/8Kagz/ <-- button onclick
Almost a 2x difference.
Apparently, the underlying behavior of onload and onclick varies as well. It may get even better in future updates.
Actually, innerHTML is less slow than createElement.
In an effort to optimize, I found that JS can parse enormous JSON effortlessly. JSON parsers can involve a huge number of nested function calls without issues. One can toggle thousands of elements between display:none and display:block without issues.
But if you try to create a few thousand elements (or even if you simply clone them), performance is terrible. You don't even have to add them to the document!
Then, once they are created, inserting into and removing from the page works super fast again.
It looks to me like the slowness has little to do with their relation to other elements of the page.
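A quick way to check this claim yourself (my sketch; absolute numbers will vary by browser):

// Compare creating 5,000 detached nodes via createElement
// with parsing one big string via innerHTML. Run in a console.
function timeIt(label, fn) {
  var t0 = performance.now();
  fn();
  console.log(label, (performance.now() - t0).toFixed(1) + ' ms');
}

timeIt('createElement x5000', function () {
  for (var i = 0; i < 5000; i++) {
    document.createElement('div');
  }
});

timeIt('innerHTML x5000', function () {
  var host = document.createElement('div');
  host.innerHTML = new Array(5001).join('<div>foo</div>');
});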
