Slow JavaScript execution in iframe only in IE - javascript

The Problem:
I've developed a web application. It is embedded in a site with the help of an iframe.
If I run the application stand-alone (IE9) at, say, www.example.com/webapp, it loads in about ten seconds flat (it's a rather large application). Chrome and FF are much faster.
If it's embedded in an iframe, however, IE completely loses it, with JavaScript execution times of 40-60 seconds until the app is done loading. Once the application is loaded, however, there are no issues and it runs flawlessly.
Recap: Stand-alone: OK. In iframe: Not OK.
In the web application a few XML files are loaded, specifically one very large one of about 8 MB. The XML files are parsed and content is created using KnockoutJS. However, this is not very relevant, as I've narrowed the problem down to the XML parsing, which is done with jQuery.
Stand-alone, the parsing takes about 10 seconds in IE9. Embedded, it's around 40-60. I've logged status messages with timestamps to the console, and I can physically see the JavaScript running incredibly slowly when embedded. Every trace-out takes 4-6 times as long, which corresponds with the increased overall load time.
Firefox and Chrome are immune, showing no slowdown, or so little that it's unnoticeable.
I've tried iframe and object embedding. Same results.
The question
Do you know why simple JavaScript execution (XML parsing, when the XML is already loaded and in memory) would take 4-6 times longer when embedded in an iframe than stand-alone?
Bonus info
I'm not talking about page load here. Everything loads fine, even the host page. This is not yet another "page hangs until the iframe is ready" problem. The problem is the execution inside the iframe being slow. I've tried embedding on the same domain, a foreign domain, internal, external. Same problem everywhere. As soon as I iframe the damn thing, load performance goes to hell. Once it's loaded, everything is fine and runs very well.
PS: I hope the bolding of what I consider keywords is OK. It's supposed to be a help, not an annoyance. I personally have problems focusing on large amounts of text.
**Performance Monitor while it's loading (IE9):** http://imgur.com/iYdMuPe

I found that setting element size with jQuery's .height(n) and .width(n) can be extremely slow; you may use .css("width", x) and .css("height", x) instead.
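For illustration, a minimal sketch of that substitution (the selector is a made-up example; note that .css() wants explicit units):

// Slower: jQuery's dimension methods do extra normalization work.
$("#chat-panel").height(300).width(500);

// Faster: write the styles directly.
$("#chat-panel").css({ width: "500px", height: "300px" });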

First, hit F12 and confirm the document mode is the same in both instances. If not, change the document mode of the outer frame to match.
If they are already the same, try instead loading the iframe's script dynamically after the outer page is complete. Older versions of IE handle resource allocation oddly, and that could be part of the problem.
Granted, this is not the answer to your question, but bringing 8 MB of XML to the client is quite inefficient. Can any of it be stripped out or processed entirely server-side?
Lastly, IE is slow to move and add DOM elements (compared to Chrome). Your best bet is to add them all at once. So if you are updating the UI as you parse the XML (instead of all at once after parsing), that will slow you down considerably.
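As a sketch of that last point (element and data names are invented), build the nodes off-DOM and attach them once:

// Build everything in a detached fragment first...
var fragment = document.createDocumentFragment();
for (var i = 0; i < items.length; i++) {
    var row = document.createElement("div");
    row.appendChild(document.createTextNode(items[i].title));
    fragment.appendChild(row);
}
// ...then touch the live DOM once, causing a single reflow.
document.getElementById("results").appendChild(fragment);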

Similar to what @ern0 said, if you are manipulating height and width in your script and experiencing slowness, then changing from jQuery's .height() and .width() methods to vanilla JS could yield a significant performance improvement.
Getters
Here is a performance test for reading the element's current height. It shows that the vanilla JS property offsetHeight is significantly faster than the .height(), .css("height") and .style.height techniques.
The difference is so significant that it is not even a competition.
Setters
Here is a performance test for setting the element's current height. It shows that the vanilla JS property .style.height is significantly faster than the .height() and .css("height") methods.
Again, the difference is so significant that it is not even a competition.
Summary
The .style.height property excels in both getting and setting by an incredible margin, as compared to the jQuery methods. The read-only offsetHeight property is significantly faster than the style.height property for getting, but (as it is read-only) it cannot be used for setting the height. As such, it may be easier to just change the code to use .style.height, if it still achieves the desired effect.
The height and width properties and methods should be pretty much the same. If you want to add performance benchmarks for them too, that is fine, but you should get the same outcome, with the width properties and methods finishing in the same place as their corresponding height counterparts.
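As a hedged sketch of what the change looks like in practice (the element id is hypothetical; keep in mind that offsetHeight includes padding and border, so it is not a drop-in replacement for jQuery's content-box .height()):

var el = document.getElementById("chat-panel");

// Getting: the read-only offsetHeight property is the fast path.
var currentHeight = el.offsetHeight;

// Setting: write style.height directly instead of $(el).height(n).
el.style.height = (currentHeight + 20) + "px";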

Apparently IE had a serious problem with getting attributes of an XML node through jQuery in a deeply nested loop. Changing this to pure JS reduced the load time to about 15 seconds. Still not great, but much, much better!
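In sketch form the substitution looked something like this (node and attribute names are made up; xml is the already-parsed XML document). jQuery builds a wrapper object on every $(this).attr() call, while getAttribute() reads the raw DOM node directly:

$(xml).find("item").each(function () {
    // Slow in IE9: var price = $(this).attr("price");
    // Fast: skip the jQuery wrapper inside the hot loop.
    var price = this.getAttribute("price");
    // ...build view models from price here...
});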

Related

Mobile Safari on iOS crashes on big pages

I have a problem where Mobile Safari crashes when loading and manipulating the DOM with jQuery when the pages get too big.
I get the same problem on both iPhone and iPad.
What is the best way to troubleshoot mobile pages to find the error? Are there any known problems that might crash Mobile Safari?
I actually found the problem. It wasn't with the JS as I thought, but with the CSS. I had added a class to run a CSS transition fading in some elements. For anonymous users these elements had display: none; and probably never ran the opacity transition.
The strange thing is that the transitions were on exactly two elements. So why would this only crash on long threads with 100+ comments?
So the bottom line is: -webkit-transition crashed the page on mobile safari.
Had the same issue; for me it was -webkit-transform: translateZ(0); that caused the Safari crash.
I know this question has been successfully answered but I just wanted to put my five cents in too as I have been banging my head against the wall over this one quite a few times:
As most answers have pointed out already, it usually comes down to memory issues. Almost anything can be the last bit that finally tips over the "memory pile", much like a translateZ or anything else.
However, in my experience it has nothing to do with the specific CSS (or JS) command. It just happens that the last transition was one too many.
What tends to help me a lot is to keep anything that is not visible at the time under display: none. This might sound primitive, but it actually does the trick. It's a simple way to tell the browser's renderer that you don't need this element right now, which releases memory. This allows you to create mile-long vertical scrollers with all sorts of 3D effects, as long as you hide the elements you are not currently using.
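A minimal sketch of the idea, assuming each heavy section sits inside a fixed-height wrapper so hiding its contents doesn't shift the layout (class names are invented):

// Cache each wrapper's position once, up front.
var sections = $(".section-wrapper").map(function () {
    return { inner: this.firstElementChild, top: $(this).offset().top };
}).get();

$(window).on("scroll", function () {
    var y = window.pageYOffset, slack = window.innerHeight;
    for (var i = 0; i < sections.length; i++) {
        var near = sections[i].top > y - slack &&
                   sections[i].top < y + 2 * slack;
        // display: none releases render-tree work for far-away sections.
        sections[i].inner.style.display = near ? "" : "none";
    }
});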
The main issue with any iOS app is memory usage. So, it is likely that your pages are using too much memory.
Mobile Safari uses some clever techniques so that the whole page doesn't have to reside in memory at any given time, only a portion of it. Maybe you could check whether anything in your page defeats this mechanism or makes it less effective.
In any case, to give more to-the-point suggestions, more information about your page would be really helpful.
By the way, you could get some hints from the crash logs that the device stores. Check whether you can find them under Settings → General → About → Diagnostics & Usage → Diagnostics & Usage Data.
If it's a memory problem, you should find something like "signal (0)"; I'm not sure this can only mean "killed due to memory usage", but I usually take it for granted when I find a signal (0).
Otherwise, the log might tell you what is wrong...
Hope this helps.
There are both memory limits and Javascript execution time limits, though it's a little fuzzy as to how you may actually hit those. Common reports come in that a page with a size greater than 10MB will have issues, and Javascript execution is limited to 10 seconds.
See Apple's documentation for more info.
I recently had an issue with Mobile Safari crashing on web-app pages containing a lot of images (30+ were enough) and a few transformations for the menu. After a lot of trial and error, I settled on a solution similar to what LinkedIn does, but for infinite scrolling using AngularJS. I used requestAnimFrame and two expanding/shrinking divs (with JS style attributes) at the top and bottom of the list to remove all image containers (with other stuff overlaid on top) except for the few that are close to the viewport. Scrolling performance (native, no JS) is fine and memory consumption is in check.
I had a similar problem: the web page worked like a charm on Android devices and crashed on iOS (iPhone and simulator).
After a lot of research (also using ios_webkit_debug_proxy) I found that the problem was connected to the jQuery ready event.
Adding a little timeout solved the problem. My application was also using iframes.
$(document).ready(function () {
    // Deferring initialization by a few milliseconds avoids the crash.
    window.setTimeout(function () { ready(); }, 10);
});

Unloading Resources on HTML with JavaScript

I'm working on an HTML5 game; it is already online, but it's currently small and everything is okay.
Thing is, as it grows, it's going to be loading many, many images, music tracks, sound effects and more. After 15 minutes of playing, at least 100 different resources might have been loaded already. Since it's an HTML5 app, it never refreshes the page during the game, so they all stack up in the background.
I've noticed that every resource I load (on WebKit at least, using the Web Inspector) remains there once I remove the <img>, the <link> to the CSS and so on. I'm guessing it's still in memory, just not being used, right?
This would end up consuming a lot of RAM eventually and lead to degraded performance, especially on iOS and Android mobiles (which I already notice slightly in the current version), whose resources are more limited than desktop computers'.
My question is: Is it possible to fully unload a resource, freeing space in RAM, through JavaScript, without having to refresh the whole page to "clean it"?
Worst scenario: Would using frames help, by deleting a frame, to free that frame's resources?
Thank you!
Your description implies you have fully removed all references to the resources. The behavior you are seeing, then, is simply the garbage collector not having been invoked to reclaim the space, which is common in JavaScript implementations until "necessary". Setting references to null or calling delete will usually do no better.
As a common case, you can typically call CollectGarbage() during scene loads/unloads to force the collection process. This is typically the best solution when the data will be loaded for game "stages", as that is a time that is not time critical. You usually do not want the collector to invoke during gameplay unless it is not a very real-time game.
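In IE that could look like the following sketch (CollectGarbage() is an IE/JScript-only global, hence the feature test; stageAssets is a hypothetical reference holder):

var stageAssets; // hypothetical holder for the current stage's resources

function unloadStage() {
    stageAssets = null; // drop references so the memory becomes collectible
    // Nudge IE's collector while nothing time-critical is running.
    if (typeof CollectGarbage === "function") {
        CollectGarbage();
    }
}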
Frames are usually a difficult solution if you want to keep certain resources around for common game controls. You need to consider whether you are refreshing entire resources or just certain resources.
All you can do is rely on JavaScript's built in garbage collection mechanism.
This kicks in whenever there is no reference to your image.
So, assuming you keep a reference to each image, you can use:
img.destroy() // framework-specific; not a native DOM method
or
img.parentNode.removeChild(img)
Worth checking out: http://www.ibm.com/developerworks/web/library/wa-memleak/
Also: Need help using this function to destroy an item on canvas using javascript
EDIT
Here is some code that allows you to load an image into a var.
<script language = "JavaScript">
var heavyImage = new Image();
heavyImage.src = "heavyimagefile.jpg";
......
heavyImage = null; // removes reference and frees up memory
</script>
This is better than using jQuery's .load() because it gives you more control over image references, and the images will be removed from memory once the reference is gone (null).
Taken from: http://www.techrepublic.com/article/preloading-and-the-javascript-image-object/5214317
Hope it helps!
There are two better ways to load images besides a normal <img> tag, which Google brilliantly discusses here:
http://www.youtube.com/watch?v=7pCh62wr6m0&list=UU_x5XG1OV2P6uZZ5FSM9Ttw&index=74
Loading the images through an HTML5 <canvas>, which is much faster. I would really watch that video and implement these methods for more speed. I would imagine garbage collection with canvas works better because it breaks away from the DOM.
Embedded data urls, where the src attribute of an image tag is the actual binary data of the image (yeah it's a giant string). It starts like this: src="data:image/jpeg;base64,/9j/MASSIVE-STRING ... " After using this, you would of course want to use a method to remove this node as discussed in the other answers. (I don't know how to generate this base64 string, try Google or the video)
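For what it's worth, one way to produce such a base64 string in the browser is canvas.toDataURL(). A sketch, assuming img is an already-loaded, same-origin image (cross-origin images taint the canvas and make toDataURL throw):

var canvas = document.createElement("canvas");
canvas.width = img.naturalWidth;
canvas.height = img.naturalHeight;
canvas.getContext("2d").drawImage(img, 0, 0);
// Yields a string like "data:image/jpeg;base64,/9j/..."
var dataUrl = canvas.toDataURL("image/jpeg");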
You said: "Worst scenario: Would using frames help, by deleting a frame, to free those frames' resources?"
Yes, using frames can help: deleting a frame frees up its resources.
All right, so I've run my tests by loading 3 different HTML pages into an <article> tag. Each HTML page had many huge images, roughly 15 per "page".
So I used the jQuery .load() function to insert each one into the tag. I also had an extra HTML page containing only an <h1>, to see what happened when a page with no images replaced the previous page.
Well, it turns out RAM usage grows as you start scrolling, and shoots up when going through a particularly big image (big as in dimensions as well as file size). But once you leave that behind and lighter images come on screen, the RAM consumption actually goes down. And whenever I replaced the content of the page using JS, the RAM consumption dropped sharply whenever it had been occupying too much. Virtual memory remained high and rarely went down.
So I guess the browser is quite smart about handling resources. It does not seem to unload them if you just leave the page sitting there, but as soon as you start loading other pages or scrolling, it starts loading / freeing up.
I guess I don't have anything to worry about after all...
Thanks everyone! =)

Javascript performance problems with too many dom nodes?

I'm currently debugging an AJAX chat that just endlessly fills the page with DOM elements. If you have a chat going for, like, 3 hours you will end up with God knows how many thousands of DOM nodes.
What are the problems related to extreme DOM Usage?
Is it possible that the UI becomes totally unresponsive (especially in Internet Explorer)?
(And related to this question is, of course, the solution, if there are any solutions other than manual garbage collection and removal of DOM nodes.)
Most modern browsers should be able to deal pretty well with huge DOM trees. And "most" usually doesn't include IE.
So yes, your browser can become unresponsive, because it needs too much RAM (→ swapping) or because its renderer is overwhelmed.
The standard solution is to drop elements, say, after the page has 10,000 lines worth of chat. Even 100,000 lines shouldn't be a big problem, but I'd start to feel uneasy at numbers much larger than that (say, millions of lines).
[EDIT] Another problem is memory leaks. Even though JS uses garbage collection, if you make a mistake in your code and keep references to deleted DOM elements in global variables (or in objects referenced from a global variable), you can run out of memory even though the page itself contains only a few thousand elements.
Just having lots of DOM nodes shouldn't be much of an issue (unless the client is short on RAM); however, manipulating lots of DOM nodes will be pretty slow. For example, looping through a group of elements and changing the background color of each is fine if you're doing this to 100 elements, but may take a while if you're doing it on 100,000. Also, some old browsers have problems when working with a huge DOM tree--for example, scrolling through a table with hundreds of thousands of rows may be unacceptably slow.
A good solution to this is to buffer the view. Basically, you only show the elements that are visible on the screen at any given moment, and when the user scrolls, you remove the elements that get hidden, and show the ones that get revealed. This way, the number of DOM nodes in the tree is relatively constant, but you don't really lose anything.
Another similar solution to this is to implement a cap on the number of messages that are shown at any given time. This way, any messages past, say, 100 get removed, and to see them you need to click a button or link that shows more. This is sort of what Facebook does with their profiles, if you need a reference.
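A minimal sketch of such a cap for a chat log (the element id and the limit are arbitrary):

var MAX_MESSAGES = 100;

function appendMessage(messageHtml) {
    var log = document.getElementById("chat-log");
    log.insertAdjacentHTML("beforeend", messageHtml);
    // Trim the oldest messages so the tree stays a constant size.
    while (log.children.length > MAX_MESSAGES) {
        log.removeChild(log.firstElementChild);
    }
}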
Problems with extreme DOM usage boil down to performance. DOM scripting is very expensive, so constantly accessing and manipulating the DOM can result in poor performance (and user experience), particularly when the number of elements becomes very large.
Consider HTML collections such as document.getElementsByTagName('div'), for example. This is a query against the document, and it will be re-executed every time up-to-date information is required, such as the collection's length. This can lead to inefficiencies. The worst cases occur when accessing and manipulating collections inside loops.
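For example, snapshotting the collection and its length outside the loop avoids re-querying the document on every iteration:

// Bad: the live collection is re-evaluated on every loop test.
for (var i = 0; i < document.getElementsByTagName("div").length; i++) {
    document.getElementsByTagName("div")[i].className = "highlight";
}

// Better: grab the collection and its length once.
var divs = document.getElementsByTagName("div");
for (var j = 0, len = divs.length; j < len; j++) {
    divs[j].className = "highlight";
}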
There are many considerations and examples, but like anything it depends on the application.

HTML page size problem with number of DOM elements increase

Recently we redesigned one of our pages, and suddenly the page size increased from 1 MB to 1.98 MB.
I compared the number of DOM elements, and it increased from 1600 to 2300. I found the number of elements with the command below:
document.getElementsByTagName('*').length
We did a load test and found the load time also increased, from 1.1 to 2 seconds. Is this the reason for all the problems?
I think the above line won't count any inline CSS or JS, right, as they are not DOM elements?
Can you please suggest?
Without knowing exactly what you redesigned, it's impossible to know what change caused the increase. But even a 1MB page is pretty large. JavaScript (and particularly jQuery) can change the number of DOM objects... consider this:
$('p').append('<span>Blah</span> <span>blah</span> <span>blah</span>');
That will add 3 DOM objects for each p tag on the page (which could be a lot!) and yet it adds only 71 bytes to your page. jQuery can similarly remove DOM objects. So I don't think the number of DOM objects is really much of a consideration.
The JavaScript that runs can manipulate the DOM and create new nodes, which would affect your count. However, it shouldn't make the page load any slower, as it's rendered on the client side.
I think you need to include more information if you expect to get a better answer.
Also, you should look into browser plugins (for Firefox) like YSlow or Firebug (Net tab) that show you all the files being loaded and how long they take to load.
Any time you have more information crossing the wire, it will take longer. Therefore, with more DOM elements in the page, the loading time will be longer. I hope this answers your question, because I'm not really sure what you are actually asking.

IE 6 performance vs. clean, unobtrusive JavaScript

Normally I love to keep my HTML clean, semantic and free of inline JavaScript or CSS. I include my .js and .css files at the top and layer functionality on top of the DOM elements.
The positives of this are:
Architectural separation of concerns
Graceful degradation where Javascript or CSS isn't supported
Search-engine friendliness
There is one major negative:
Performance problems in IE 6
Because all the events are attached to the elements through JavaScript code, which accesses the DOM, performance in IE suffers.
This is especially so when using jQuery (which happens to be my favorite JavaScript framework).
So it seems like I have two choices: either keep the code nice and neat, and have IE 6 users (around 20% of the user-base) complain, or "de-normalize" the code to improve IE 6 performance.
Is there a "middle way" in this situation? Or am I doomed?
Note: I'm not saying that my performance problems are caused by having JavaScript in a separate file.
I can achieve wonderful performance in IE while keeping it in a separate file.
The problem is, I still have to put the actual event handlers into 'onclick' attributes in the HTML. For example:
<span onclick="doSomething()">More...</span>
It would be much nicer if I could just write:
<span id="more-button">More...</span>
And then assign it separately, in Javascript, with the following:
$("#more-button").click(doSomething);
Unfortunately, it seems this is bad for IE6 performance.
In a talk available at YUI Theater, Joseph Smarr talks about performance and states the opinion that it is OK to have onmousedown handlers and the like inline in your code for performance reasons, instead of finding the DOM element and then attaching the event handler. However, I do not know of any measurements showing how much that speeds things up. Other techniques for unobtrusive performance boosts are (EDIT: note that they are browser-independent, hence not restricted to IE):
Use onmousedown instead of onclick. Smarr says there is a 100ms or so period between the firing of the two events.
For perceived performance: Give instant visual feedback by using window.setTimeout(visualFeedback, 0). That lets the browser draw some graphical change instantly, letting the user know that something is happening.
When you have to traverse the DOM, cache intermediate results in a variable to reuse them (guess you know that one).
There is also a demo that shows the benefit of the first two points. The button at the top uses onclick and normal execution, the one below uses onmousedown and setTimeout(func, 0).
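A sketch combining the first two tips (the element id and doSomething() are placeholders, echoing the example in the question):

var button = document.getElementById("more-button");

button.onmousedown = function () {
    // Perceived performance: paint the pressed state right away...
    button.className = "pressed";
    // ...then yield so the browser can render before the real work runs.
    window.setTimeout(doSomething, 0);
};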
You should actually put your JavaScript at the bottom of the page. As far as performance issues around attaching events go, make sure to measure carefully before optimizing. dynaTrace AJAX Edition is a great tool for measuring and understanding IE performance.
I don't think your performance problems are because you put your JavaScript code into separate files. I do that... a lot of people do that... It's considered best practice.
jQuery has some performance issues; that may be one place to look. The other place to look is your JavaScript code itself. Without seeing it, I wouldn't be able to help you make your application perform better.
If your code runs on other browsers and just performs badly on IE6, then count yourself lucky!
Really, ignore this old beast!
If your code runs perfectly on most browsers and you would have to slow it down because IE6 cannot run it, then you should display a message to those IE6 users (or just live with bad performance on every browser).
But your situation seems simple and straightforward to me. Don't worry about it! IE6 is and should be dead! Even some big websites have begun to display upgrade notices to IE6 users instead of their usual content; they don't support it anymore, because it is crap!
