jQuery Columnizer plugin: 40 times faster in Firefox on Windows. Why? - javascript

I have a number of large texts which I'm running through a lightly stripped-down version of the Columnizer jQuery plugin (https://github.com/adamwulf/Columnizer-jQuery-Plugin) to turn into "columns" for use in another plugin. Columnizer is an OK performer for my purposes--as long as there is no floated content within the chunk being columnized.
Chrome, FF and IE10 all have broadly similar performance on pure text, or on text with images and other simple HTML mixed in. However, if you include floated content (images, in this case), things change dramatically:
Big book with images, roughly 700 columns created:

Test condition                           Firefox (sec)   Chrome (sec)
---------------------------------------------------------------------
Normal book build (images, floats)            31.5          1254.2
As above, but no images                       23.2            18.9
With images, but no floated images            25.1            24.7
Only a few floated images                     27.6          1010.9
All images and tags except 'p' removed        21.3            18.9
As you can see, that is a huge, huge difference. (I do cache the builds, but because each browser/OS combination renders things slightly differently, I still have to do a first build in each of the "major" browsers. You haven't lived until you've waited for Safari on an iPad to build this thing -- multiply the Windows Chrome numbers by 4.)
So my question: what is Firefox doing "right" without being asked, and what can I do to rework the Columnizer code to mimic it in the other browsers? Columnizer is fairly "dirty" in that it does thousands of appends (I think over 100,000 in this book's case), which I know is definitely Not Cool. Is Firefox using document fragments? Some other trick?
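For reference, the usual fix for thousands of appends is to batch them, either by joining an HTML string and assigning it once, or by building nodes in a DocumentFragment so the live document is only touched once. A minimal sketch with hypothetical helper names (this is not Columnizer's code):

```javascript
// Hypothetical helpers illustrating batched DOM insertion; not Columnizer code.

// Pure string building: join once instead of appending node by node.
function buildParagraphMarkup(paragraphs) {
  var parts = [];
  for (var i = 0; i < paragraphs.length; i++) {
    parts.push('<p>' + paragraphs[i] + '</p>');
  }
  return parts.join('');
}

// DocumentFragment batching: the live DOM is mutated once, so the page
// is laid out once instead of once per paragraph.
function appendBatched(container, paragraphs) {
  var frag = document.createDocumentFragment();
  for (var i = 0; i < paragraphs.length; i++) {
    var p = document.createElement('p');
    p.appendChild(document.createTextNode(paragraphs[i]));
    frag.appendChild(p);
  }
  container.appendChild(frag);
}
```

In a browser you'd call either `container.innerHTML = buildParagraphMarkup(texts)` or `appendBatched(container, texts)`; both keep the intermediate work off the rendered page.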
Columnizer requires that the destination container (where it flows the content) be in the DOM so that styles can be applied correctly (i.e., no display:none toggled off when done). In my CSS, I set this to position:absolute; visibility:hidden, as recommended. I'm thinking FF must treat this set of attributes in a way the other browsers don't. Or...?
I should note that at the end of the process, except for slight font rendering differences, the output is identical across browsers.

I'm not absolutely certain this answers my own question, but I did make things a lot better, which in some ways is answer enough for me. I'd love to actually learn why my solution made any difference.
Background: I do as much "pre-formatting" of the book's text in PHP as I can, since it is much, much faster to do clean-up and other mundane text chores server-side. This cleaned-up chunk of HTML is then thrown into a div, which is what I columnize. This container (let's call it "raw") and the empty destination container for the columns ("dest") need to be in the DOM, so I have this CSS on a wrapper div that has "raw" and "dest" as children:
position: absolute;
top: -2000px;
left: -2000px;
width: 700px;
height: 500px;
visibility: hidden;
overflow: hidden;
This 'removes' the two divs from the page (as far as our eyeballs are concerned), but they're still in the DOM, allowing Columnizer to work with them.
Hmmm: In Firefox, this was enough to make things work well, floats or no. But in Chrome, Safari and IE...well, look at the table in my original question. Yuck.
But by adding position: absolute to "raw", the other browsers' performance improved dramatically. They're not as fast as FF, but not far behind: instead of 1200+ seconds, Chrome now takes 40 seconds when presented with the full book. The iPad, instead of taking a glacial epoch, takes a couple of minutes.
Why? It's a mystery to me. Here's what seem to be the pertinent bits within columnizer, during its prep:
...
var $sourceHtml = $('div.raw'),
    $cache = $('<div></div>'),
    $inBox = $('#dest'),
    $destroyable, $col;
...
$cache.append($sourceHtml.contents().clone(true));
$inBox.empty();
$inBox.append($("<div style='width:" + options.width + "px;'></div>"));
$col = $inBox.children(":last");
$inBox.empty();
try {
    $destroyable = $cache.clone(true);
} catch (e) {
    $destroyable = $cache.clone();
}
...
That empty div created as the cache should be in the DOM but outside of the HTML page at this point, since it has not been appended to anything yet. Eh? So as it is manipulated, the page shouldn't need to be repainted.
My half-arsed theory is that while FF recognizes that, the other browsers don't, and consider $destroyable to be within the painted area of the page--perhaps appending it to the body tag or even to "wrap" (though watching through Chrome's inspector showed no such thing)?
As nodes are stripped from $destroyable and appended to the columns being created, the other browsers repaint the page every time $destroyable is altered. Speedy with block and inline stuff, expensive when floats are added.
But to poke a hole in that half-baked idea: Trying to add "position: absolute" to $destroyable makes no difference. It's only when the original "raw" div has that attribute that things speed up.
Anyway, back to heavy drinking and ignorance. Please sigh patiently and make this a teachable moment, if you can!

Related

Slow javascript execution in Iframe only in IE

The Problem:
I've developed a web application. It is embedded in a site with the help of an iFrame.
If I run the application standalone (IE9) at, say, www.example.com/webapp, it loads in about ten seconds flat (it's a rather large application). Chrome and FF are much faster.
If it's embedded in an iframe, however, IE completely loses it, with JavaScript execution times of up to 40-60 seconds until the app is done loading. Once the application is loaded, however, there are no issues and it runs flawlessly.
Recap: standalone: OK; in iframe: not OK.
In the web application a few XML files are loaded, including a very large one of about 8 MB. The XML files are parsed and content is created using KnockoutJS. However, this is not very relevant, as I've narrowed it down to the XML parsing, which is done with jQuery.
Standalone, the parsing takes about 10 seconds in IE9. Embedded, it's around 40-60. I've logged status messages with timestamps, and I can physically see the JavaScript running incredibly slowly when embedded. Every trace-out takes 4-6 times as long, which corresponds with the increased overall load time.
Firefox and Chrome are immune, showing no slowdown, or so little that it's unnoticeable.
I've tried iFrame and Object embedding. Same results.
The question
Do you know why simple JavaScript execution (XML parsing, when the XML is already loaded and in memory) would take 4-6 times longer when embedded in an iframe than standalone?
Bonus info
I'm not talking about page load here. Everything loads fine, even the host page. This is not yet another "page hangs until the iframe is ready" problem; the problem is the execution inside the iframe being slow. I've tried embedding on the same domain, a foreign domain, internal, external -- same problem everywhere. As soon as I iframe the damn thing, load performance goes to hell. Once it's loaded, everything is fine and runs very well.
P.S.: I hope bolding what I think are the keywords is OK. It's supposed to be a help, not annoying. I personally have trouble focusing on large amounts of text.
Performance Monitor while it's loading (IE9):
http://imgur.com/iYdMuPe
I found that setting element size with jQuery's .height(n) and .width(n) can be extremely slow; you may use .css("width", x) and .css("height", x) instead.
First, hit F12 and confirm the document mode is the same in both instances. If not, change the document mode of the outer frame to match.
If they are already the same, try instead loading the iframe script dynamically after the outer page is complete. Older versions of IE handle resource allocation oddly, which could be part of the problem.
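A rough sketch of that deferred load; the function name and URL here are made up for illustration:

```javascript
// Hypothetical sketch: create the iframe only after the outer page has
// finished loading, so older IE doesn't fight over resources during load.
function addIframeDeferred(doc, url) {
  var frame = doc.createElement('iframe');
  frame.src = url;
  doc.body.appendChild(frame);
  return frame;
}

// In the browser you would wire it to the load event, e.g.:
// window.onload = function () { addIframeDeferred(document, '/webapp'); };
```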
Granted, not the answer to your question but bringing 8 MB of XML to the client is quite inefficient. Can any of this be stripped out or entirely processed server side?
Lastly, IE is slow to move and add DOM elements (compared to Chrome). Your best bet is to add them all at once. So if you are updating the UI as you parse the XML (instead of all at once after parsing), that will slow you down considerably.
Similar to what @ern0 said, if you are manipulating height and width in your script and experiencing slowness, then changing from jQuery's .height() and .width() methods to vanilla JS could yield a significant performance improvement.
Getters
Here is a performance test for reading the element's current height. It shows that the vanilla JS property offsetHeight is significantly faster than the .height(), .css("height") and .style.height techniques.
The difference is so significant that it is not even a competition.
Setters
Here is a performance test for setting the element's current height. It shows that the vanilla JS property .style.height is significantly faster than the .height() and .css("height") methods.
Again, the difference is so significant that it is not even a competition.
Summary
The .style.height property excels in both getting and setting by an incredible margin, as compared to the jQuery methods. The read-only offsetHeight property is significantly faster than the style.height property for getting, but (as it is read-only) it cannot be used for setting the height. As such, it may be easier to just change the code to use .style.height, if it still achieves the desired effect.
The height and width properties and methods should be pretty much the same. If you want to add performance benchmarks for them too, that is fine, but you should get the same outcome, with the width properties and methods finishing in the same place as their corresponding height counterparts.
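As a concrete sketch of swapping the jQuery setter for the vanilla property (the element list and pixel value are hypothetical):

```javascript
// Hypothetical sketch: write el.style.height directly instead of calling
// $(el).height(n), skipping jQuery's per-call overhead in hot loops.
function setHeightFast(elements, px) {
  var value = px + 'px';
  for (var i = 0; i < elements.length; i++) {
    elements[i].style.height = value;
  }
  return value;
}

// Equivalent jQuery (slower per the benchmarks above):
// $(elements).height(px);
```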
Apparently IE had a serious problem with getting attributes of an xml node through jQuery in a deeply nested loop. Changing this to pure JS reduced load time to about 15 seconds. Still not great, but much, much better!
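The kind of change that helped, sketched with hypothetical names: read attributes off the raw nodes with the native getAttribute() instead of wrapping each node in jQuery inside the inner loop.

```javascript
// Hypothetical sketch: pull an attribute from every node with the native
// getAttribute() rather than $(node).attr(name) inside a hot loop.
function collectAttributes(nodes, name) {
  var values = [];
  for (var i = 0; i < nodes.length; i++) {
    values.push(nodes[i].getAttribute(name));
  }
  return values;
}

// Slower jQuery equivalent, one wrapper object per iteration:
// $(nodes).each(function () { values.push($(this).attr(name)); });
```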

Tables with fixed header, the current state of the art

It's a recurring question since, like, forever: how to keep the headers fixed on top when scrolling down a table?
Yes, I know that the web isn't designed to show long spreadsheet-like tables, but when a client asks for something like that (because s/he is used to those old Excel documents...), you have no choice but to comply.
This is to sum up the major problems and attempted solutions with fixing the headers of a table:
a <thead> element can be relatively positioned, but it won't move from its position;
with position: absolute or fixed, the head is "detached" from the rest of the table, and the column widths don't match anymore; moreover, you have to reserve some space above the table body;
this can be solved by using a separate table for just the headers, leaving the body in the other, but that won't fix the mismatched column widths;
you can give the columns fixed widths, but that's not always applicable, depending on your needs - e.g. in the case of dynamic or unknown content;
JavaScript can adjust the widths of the columns, but accessing properties like offsetWidth causes a reflow, which is quite a heavy task even for a browser like Chrome or Firefox, and adds an annoying rendering lag even with not-so-large tables with dynamic content;
you can create a copy of the table, using the first to show the headers and the other to show the body, but this makes the DOM twice as heavy and relies on JavaScript to copy over changes to the content;
using <div>s with display: table-whatever won't actually solve a thing, and deprives you of the chance to use rowspans and colspans.
With the latest web technologies, maybe we can try something new. I've experimented with a solution using transform: translate which behaves acceptably well (see the fiddle here), keeps the space above the table body, and keeps the column widths matching.
This of course won't work in IE8 and lower, but it works fine in Chrome, Safari and Firefox. It doesn't work in Opera 12.x and below, but it does work in Opera 15, thanks to its brand new Blink rendering engine. I've noticed this works better, or at least as well, with translate than with translate3d: it shows a bit less flickering.
Unfortunately, it doesn't work in IE9 and IE10, where the headers stay in their positions. That's a real shame, since there's no way to detect this behaviour (that I know of).
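The core of the translate trick described above can be sketched like this (the container/element names are hypothetical; the fiddle linked earlier is the real demo):

```javascript
// Hypothetical sketch: on every scroll, translate the <thead> down by the
// scroll offset so it appears fixed while staying in the table's layout,
// which preserves the column widths.
function headerTransform(scrollTop) {
  return 'translate(0,' + scrollTop + 'px)';
}

// Browser wiring (assumed markup: a scrollable container holding the table):
// container.addEventListener('scroll', function () {
//   thead.style.transform = headerTransform(container.scrollTop);
// });
```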
So, what are your most recent solutions for this problem? I'm looking for CSS and pure Javascript solutions.
I recently had to do this exact thing for a client and found it to be next to impossible. I did find a really good library called SlickGrid; however, when it came to translating that to mobile, it just wouldn't work (performance was too big an issue). If mobile is not a concern of yours, then I definitely suggest checking it out. https://github.com/mleibman/SlickGrid

Specific height for div in Windows Mobile 6.1

I always come to stackoverflow to check for answers; however, for the current question I have not found any relevant information yet.
I have a Windows Mobile 6.1 PDA and I want to create a simple HTML page for it. I want specific divs of the page to have a specific height, based on the text inside these divs.
I want at most 2 rows of text. The text can contain HTML code; I want to slice the text without breaking the HTML.
Before you think "there are 100 different solutions with CSS or JavaScript for this", I should mention that 6.1 uses a mixture of IE4 with some features of IE5. The browser supports only CSS1 (so no max-height, no overflow:hidden, no position:absolute, no top, bottom, etc.).
The browser also supports a very limited range of JavaScript functions. I thought of parsing the DOM with JavaScript, constantly checking whether the text inside the div is taller than 28pt (that is, two rows), and cutting it. However, most of the DOM functions do not work: createElement() doesn't work, and neither does appendChild. Only getElementById and innerHTML work.
I found this solution, https://code.google.com/p/cut-html-string/, for JavaScript, which works perfectly in modern browsers; however, since it uses functions such as createElement(), appendChild(), cloneNode(), etc., it does not work with IE4. A work-around for createElement() is innerHTML, which works perfectly, but then the browser reports errors for the other DOM functions the code uses.
P.S.: Please do not answer "change PDA", etc. I know the OS is very old, but I have to use it.
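Since only getElementById, innerHTML, and string operations are available, one naive approach is to count only visible characters while copying, skipping anything inside angle brackets. This sketch deliberately avoids every DOM function; note that it does not re-close tags cut off past the limit:

```javascript
// Naive sketch: truncate HTML text by visible-character count using only
// string operations, so it should run even on an IE4-era engine.
// Characters inside tags are copied but not counted. Open tags past the
// cut are NOT re-closed -- a real solution would need to track them.
function cutHtmlText(html, maxChars) {
  var out = '';
  var count = 0;
  var inTag = false;
  for (var i = 0; i < html.length; i++) {
    if (count >= maxChars && !inTag) break; // finish any tag in progress
    var ch = html.charAt(i);
    if (ch === '<') inTag = true;
    out += ch;
    if (ch === '>') inTag = false;
    else if (!inTag) count++;
  }
  return out;
}
```

In the page you would assign the result back with something like `document.getElementById('myDiv').innerHTML = cutHtmlText(text, limit);` (the id is hypothetical).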
OK, so if you're only dealing with one style of device, you could use a JavaScript viewport sniff and use innerHTML adjustments to switch to a new style sheet, so that it works for that device; if you have to program for multiple devices, you can create multiple style sheets. For testing purposes, I'd suggest finding an Android or iOS device with a similar screen size so you can use something like Edge Inspect from Adobe Creative Cloud to make adjustments and see the changes.
Someone has covered viewport detection here:
Find the exact height and width of the viewport in a cross-browser way (no Prototype/jQuery)
That may be of use.
Since you're dealing with basic JavaScript, you should just be able to change the address of a CSS link, and that may be a solution. If I mistook any of the functionality of the device, I'm sorry; it has been a while since I had a Windows Mobile 6.x phone.

HTML page size problem when the number of DOM elements increases

Recently we redesigned one of our pages, and suddenly the page size increased from 1 MB to 1.98 MB.
I compared the number of DOM elements, and it increased from 1600 to 2300. I counted the elements with the command below:
document.getElementsByTagName('*').length
We did a load test and found the load time also increased, from 1.1 to 2 seconds. Is this the reason for all these problems?
I think the above line won't count any inline CSS or JS, right, as they are not DOM elements?
Can you please advise?
Without knowing exactly what you redesigned, it's impossible to know what change caused the increase. But even a 1MB page is pretty large. JavaScript (and particularly jQuery) can change the number of DOM objects... consider this:
$('p').append('<span>Blah</span> <span>blah</span> <span>blah</span>');
That will add 3 DOM objects for each p tag on the page (which could be a lot!) and yet it adds only 71 bytes to your page. jQuery can similarly remove DOM objects. So I don't think the number of DOM objects is really much of a consideration.
The JavaScript that runs can manipulate the DOM and create new nodes, which would affect your count. However, it shouldn't make the page load any slower, as that work happens on the client side.
I think you need to include more information if you expect to get a better answer.
Also, you should look into browser plugins (for Firefox) like YSlow or Firebug (Net tab) that show you all the files being loaded and how long they take to load.
Any time you have more information crossing the wire, it will take longer. Therefore, with more DOM elements in the page, the loading time will be slower. I hope this answers your question, because I'm not really sure what you are actually asking.

Is there a unified way to know if a node is visible or not?

I'd like to be able to know if a node is visible and rendered on screen. As far as I know, there are at least 3 standard and easy ways of making HTML nodes not visible:
Setting opacity: 0;
Setting display: none;
Setting visibility: hidden.
I could check for just these three, but I'm afraid people can get creative when it comes to ways of hiding contents:
Sending the element offscreen using negative margins;
Using a width or height of 0 and hiding overflow;
and many more that I trust people have developed.
So I was wondering if there is a standard way of determining if a node is rendered to the screen. I'm pretty sure all major browsers determine it for themselves to accelerate drawing, so maybe it's somehow exposed.
You might try using jQuery's :visible modifier.
http://api.jquery.com/visible-selector/
Unfortunately, I'm fairly sure that doesn't take into account any of the "tricky" cases that you are talking about.
If this is your page, then you have most of the control, and it becomes a matter of applying the standards you implement. If this is a foreign page (e.g. if you're writing a bookmarklet), then the number of variables is extremely large.
Visibility means different things to people and browsers. The browser needs to know the context and layout of the page and whether an object takes up space, which is true even in the cases of opacity:0 and visibility:hidden, which would be why jQuery works that way.
So you would need to look at the particular element, including its margins, padding, overflow attributes, visibility, display, all the opacity settings, check for color:rgba(*,*,*,0) too I guess. Then you need to look at every parent object all the way back to the document.
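There's no single exposed flag that I know of, but the ancestor walk can be sketched like this. The style lookup is injected as a callback (an assumption made here so the logic stays self-contained); in a browser you'd pass a wrapper around window.getComputedStyle. It covers only the three standard hiding techniques, not every creative trick:

```javascript
// Hypothetical sketch: walk the node and its ancestors, flagging the
// common hiding techniques. getStyle(node) should return an object with
// display / visibility / opacity (e.g. built on window.getComputedStyle).
function isHidden(node, getStyle) {
  for (var n = node; n; n = n.parentNode) {
    var s = getStyle(n);
    if (!s) break; // reached the document node
    if (s.display === 'none' ||
        s.visibility === 'hidden' ||
        parseFloat(s.opacity) === 0) {
      return true;
    }
  }
  return false;
}
```

Off-screen positioning and zero-size-with-hidden-overflow would need extra geometry checks on top of this.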
