Does 'display:none' improve or worsen performance? - javascript

I have a page with a lot of vertical scrolling and thousands of DOM elements. To improve performance, I thought about setting display: none; on the content of the divs above and below the viewport, i.e. the divs which are not visible (while keeping their heights, obviously).
In order to check whether my idea makes any sense, I searched SO and found this question. According to the comments and the accepted answer, the best strategy is to do nothing, since display: none; triggers reflow and may have the opposite effect:
Setting display to none triggers reflow, which is the complete opposite of what you want if what you want is to avoid reflow. Not doing anything doesn't trigger reflow. Setting visibility to hidden will also not trigger reflow. However, not doing anything is much easier.
However, there is a recent answer (which unfortunately seems more like a comment or even a question) that claims that display: none; is the current strategy used by sites like Facebook, where the vertical scroll is almost infinite.
It's worth mentioning that, unlike the OP's description in that question, each visible div in my site is interactive: the user can click, drag, and do other stuff with the div's contents (which, I believe, makes the browser repaint the page).
Given all this information, my question is: does display: none; applied to the divs above/below the viewport improve performance, worsen it, or have no effect at all?

The "display: none" property of an Element removes that element from the document flow.
Redefining that element display property from none to any other dynamically, and vice versa, will again force the change in document flow.
Each time requiring a recalculation of all elements under the stream cascade for new rendering.
So yes, a "display: none" property applied to a nonzero dimensional and free flow or relatively positioned element, will be a costly operation and therefore will worsen the performance!
This will not be the case for say position: absolute or otherwise, removed elements form the natural and free document flow who's display property may be set to none and back without triggering e re-flow on the body of the document.
Now in your specific case [see edited graph] as you move/scroll down bringing the 'display: block' back to the following div will not cause a re-flow to the rest of the upper part of the document. So it is safe to make them displayable as you go. Therefore will not impact the page performance. Also display: none of tail elements as you move up, as this will free more display memory. And therefore may improve the performance.
Which is never the case when adding or removing elements from and within the upper part of the HTML stream!
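To make this concrete, here is a minimal sketch of the scheme described above (mine, not the answer author's): the rows are absolutely positioned, so toggling their display does not reflow the rest of the document, and the container keeps its full height so the scroll range is preserved. The "list" id, row height, and buffer size are illustrative assumptions.
const main = document.getElementById("list"); // assumed container id
const rows = Array.from(main.children);
const ROW_HEIGHT = 50; // assumed uniform row height in px
const BUFFER = 20;     // rows kept rendered above/below the viewport

main.style.position = "relative";
main.style.height = rows.length * ROW_HEIGHT + "px"; // preserve the scroll range
rows.forEach((row, i) => {
  row.style.position = "absolute"; // out of flow: display toggles won't reflow the body
  row.style.top = i * ROW_HEIGHT + "px";
  row.style.width = "100%";
});

function update() {
  // Assumes the list starts at the top of the page, so scrollY maps to a row index.
  const first = Math.floor(window.scrollY / ROW_HEIGHT) - BUFFER;
  const last = Math.ceil((window.scrollY + window.innerHeight) / ROW_HEIGHT) + BUFFER;
  rows.forEach((row, i) => {
    row.style.display = i >= first && i <= last ? "" : "none";
  });
}
window.addEventListener("scroll", update, { passive: true });
update();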

The answer is, like just about everything, it depends. I think you're going to have to benchmark it yourself to understand the specific situation. Here's how to run a "smoothness" benchmark since the perception of speed is likely more important than actual system performance for you.
As others have stated, display:none leaves the element in the DOM and in memory. Typically the rendering is the expensive part, but that depends on how many elements have to be re-rendered when things change. If the repaint operation still has to check every element, you may not see a huge performance increase. Here are some other options to consider.
Use Virtual DOM
This is why frameworks like React & Vue use a Virtual DOM. The goal is to take over the browser's job of deciding what to update and to make only the smallest changes necessary.
Fully Add/Remove elements
You could replicate something similar by using Intersection Observer to figure out what's in/out of the viewport and actually add/remove elements from the DOM instead of relying on display:none alone, since running JavaScript is generally cheaper than large paints.
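A rough sketch of that idea (mine; the .placeholder markup, data-index attribute, and renderRow helper are hypothetical): keep cheap fixed-height placeholders in the DOM and mount the real content only while a placeholder is near the viewport.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const el = entry.target;
    if (entry.isIntersecting) {
      if (!el.firstChild) el.appendChild(renderRow(el.dataset.index));
    } else {
      el.replaceChildren(); // drop the subtree, keep the placeholder
    }
  }
}, { rootMargin: "200% 0%" }); // start mounting well before rows become visible

document.querySelectorAll(".placeholder").forEach((el) => observer.observe(el));

function renderRow(i) {
  // hypothetical row factory; replace with your real rendering
  const div = document.createElement("div");
  div.textContent = "Row " + i;
  return div;
}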
Add GPU Acceleration
On the flip side, if the GPU takes over the rendering, the paint might not be a performance drain, but that's only on some devices. You can try it by adding transform: translate3d(0,0,0); to force GPU acceleration.
Give the Browser Hints
You may also see an improvement by utilizing the CSS will-change property. One of its recognized values covers content moving in and out of the viewport, so you can set will-change: scroll-position; on the elements.
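Hedged illustrations of both of these hints (the "#list" selector is an assumption, and whether either helps is device-dependent, so profile before and after):
const list = document.querySelector("#list");
list.style.transform = "translate3d(0, 0, 0)"; // promotes the element to its own GPU layer
list.style.willChange = "scroll-position";     // tells the browser to expect scrolling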
CSS Content-Visibility (The bleeding edge)
The CSS Working Group at the W3C has the CSS Containment module in draft form. The idea is to allow the developer to tell the browser what to paint and when; this includes paint and layout containment. content-visibility: auto is a super helpful property designed for just this type of problem. Here's more background.
Edit (April 2021): this is now available in Chrome 85+, Edge (Chromium) 85+, and Opera 71+. We're still waiting on Firefox support, but Can I Use puts it at 65% coverage.
It's worth a look as the demos I saw made a massive difference in performance and Lighthouse scores.
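As a sketch of how it might be applied here (the .row selector and the 10px estimate are assumptions): contain-intrinsic-size reserves an estimated box so the scrollbar stays stable while the browser skips rendering work for off-screen rows.
document.querySelectorAll(".row").forEach((row) => {
  row.style.contentVisibility = "auto";    // skip rendering while off-screen
  row.style.containIntrinsicSize = "10px"; // assumed placeholder height per row
});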

Two tips for improving performance when you have thousands of DOM elements that need to be scrolled through, interacted with, etc.:
Try to manage the bindings provided by front-end frameworks manually. Front-end frameworks may need a lot of additional processing for the simple data binding you require. They are good up to a certain number of DOM elements, but if your case is special and exceeds the element count of the average case, the way to go is to bind manually with your circumstances in mind. This can certainly remove any lagging.
Buffer the DOM elements in and around the viewport. If your DOM elements are a representation of data in a table (or tables), fetch that data with a limit and render only what you fetched. The user's scrolling should drive the fetching and rendering upwards or downwards.
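A minimal sketch of such buffering (mine; the /rows endpoint, page size, row height, and renderRows helper are all hypothetical):
const PAGE = 200;       // rows per fetch
const ROW_HEIGHT = 10;  // assumed uniform row height
let currentOffset = -1;

async function showWindow(offset) {
  if (offset === currentOffset) return; // this window is already rendered
  currentOffset = offset;
  const res = await fetch(`/rows?offset=${offset}&limit=${PAGE}`); // hypothetical endpoint
  renderRows(await res.json());
}

function renderRows(rows) {
  // hypothetical: map row data to DOM nodes inside an assumed #list container
  const main = document.getElementById("list");
  main.replaceChildren(...rows.map((r) => {
    const div = document.createElement("div");
    div.textContent = String(r);
    return div;
  }));
}

window.addEventListener("scroll", () => {
  const first = Math.floor(window.scrollY / ROW_HEIGHT);
  showWindow(Math.max(0, first - PAGE / 2)); // keep a buffer above the viewport
}, { passive: true });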
Just hiding the elements is definitely not going to solve a performance problem caused by having thousands of DOM elements. Even though you can't see them, they still occupy the DOM tree and memory; the browser just doesn't have to paint them.
Here are some articles:
https://codeburst.io/taming-huge-collections-of-dom-nodes-bebafdba332
https://areknawo.com/dom-performance-case-study/

To add to the already posted answers.
The key takeaways from my testing:
setting an element to display: none; decreases RAM usage
elements that are not displayed are not affected by layout shifts and therefore have no (or very little) performance cost in this regard
Firefox is way better at handling lots of elements (~50x)
Also try to minimize layout changes.
Here is my test setup:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <style>
    body {
      margin: 0;
    }
    .testEl {
      width: 100%;
      height: 10px;
    }
  </style>
</head>
<body>
  <main id="main"></main>
</body>
<script>
  // firefox Max
  // const elementCount = 12200000;
  // chrome Max
  const elementCount = 231000;
  const main = document.getElementById("main");
  const onlyShowFirst1000 = true;
  let _content = "";
  for (let i = 0; i < elementCount; i++) {
    _content += `<div class="testEl" style="background-color: hsl(${(Math.random() * 360)|0}, 100%, 50%); display: ${!onlyShowFirst1000 || i < 1000 ? "block" : "none"}"></div>`;
  }
  main.innerHTML = _content;

  const addOneEnd = () => {
    const newEl = document.createElement("div");
    newEl.classList.add("testEl");
    newEl.style.backgroundColor = `hsl(${(Math.random() * 360)|0}, 100%, 50%)`;
    requestAnimationFrame(() => {
      main.appendChild(newEl);
    });
  };

  const addOneBeginning = () => {
    const newEl = document.createElement("div");
    newEl.classList.add("testEl");
    newEl.style.backgroundColor = `hsl(${(Math.random() * 360)|0}, 100%, 50%)`;
    requestAnimationFrame(() => {
      main.insertBefore(newEl, main.firstChild);
    });
  };

  const loop = (front = true) => {
    front ? addOneBeginning() : addOneEnd();
    setTimeout(() => loop(front), 100);
  };
</script>
</html>
I create a lot of elements and have the option to only display the first 1000 of them using the onlyShowFirst1000 flag. When displaying all elements, Firefox allowed up to ~12,200,000 elements (using 10 GB of my RAM) and Chrome up to ~231,000.
Memory usage (at 231,000 elements, by onlyShowFirst1000):
+---------+----------+----------+-------------+
|         | false    | true     | reduction % |
+---------+----------+----------+-------------+
| Chrome  | 415,764k | 243,096k | 42%         |
+---------+----------+----------+-------------+
| Firefox | 169.9MB  | 105.7MB  | 38%         |
+---------+----------+----------+-------------+
Changing the display property of an element to or from none causes the area to be repainted, but the area of your element will usually be relatively small, therefore the performance cost will also be small.
But depending on your layout, the display change might also cause a layout shift which could be quite costly since it would cause a big part of your page to repaint.
In the future (e.g. Chrome 85+) you will also be able to use the content-visibility property to tell the browser which elements don't have to be rendered.
Also, you can set the browser to show repaints using the dev tools; for Chrome, open the Rendering tab and check "Paint flashing".

The strategy of "virtual scrolling" is remove the HTML element when it's out of viewport, this improve the performance because reduce the number of the elements into the dom and reduce the time for repaint/reflow all the document.
Display none don't reduce the size of the dom, just make the element not visible, like visible hidden, without occupies the visual space.
Display none don't improve the performance because the goal of virtual scrolling is reduce the number of the elements into the dom.
Make an element display none or remove an element, trigger a reflow, but with display none you worst the performance because you don't have the benefit of reduce the dom.
About performance, display none is just like visible hidden.
Google Lighthouse flags as bad performance pages with DOM trees that:
Have more than 1,500 nodes total
Have a depth greater than 32 nodes
Have a parent node with more than 60 child nodes
In general, look for ways to create DOM nodes only when needed, and destroy nodes when they're no longer needed.
The only benefit of display: none is that a hidden element will not cause a repaint or reflow when it changes.
Source:
https://web.dev/dom-size/
https://developers.google.com/speed/docs/insights/browser-reflow

Related

How can I compensate for longer load times when dynamically setting div dimensions with CSS and JS?

I am creating a Polymer app which displays information in rows in a custom element "table". I am using polymer flexbox classes to initially position elements. I need to take measurements with JS after the elements have loaded and assign widths to elements in the table accordingly. The problem goes something like this:
row 1
*--------------------*
| *-----------*|
| | el a ||
| *-----------*|
*--------------------*
row 2
*--------------------*
| *------*|
| | el b ||
| *------*|
*--------------------*
Row 1 and row 2 have fixed widths, but el a or el b could be of any width. Regardless of the contents of el a or b, I want the els contained in the rows to all be the same width as the widest one. So in this case, a is widest, so el b should adjust its width to match a.
I am using Polymer's attached method to ensure the elements are loaded before taking scrollWidth measurements via JS. I initially ran into the problem of getting undefined elements when I tried to grab them with querySelector, but I fixed this using Polymer's async. I can style the elements just fine in a modern browser. Consider the following code.
attached: function() {
  this.async(function() {
    console.log(Polymer.dom(this.root).querySelector('#a').scrollWidth);
    console.log(Polymer.dom(this.root).querySelector('#b').scrollWidth);
  }, 100);
}
A modern version of Chrome will return something like 100px for a and 60px for b. If I test an older version of Chrome or a different browser like Firefox, a and b will be 5-15px less than what modern Chrome measured. I found that if I increase the async time enough, no matter the age of the browser, I will eventually get a measurement matching what modern Chrome returned. That is to say, while the div appears to exist on the page and I can grab it with querySelector, it seems to still be settling, so it is not at its full width yet.
I don't like guessing how long it will take for the elements to fully load before I can take measurements. I am trying to figure out a way I can be 100% confident that the elements are done loading or perhaps an alternate method of styling these elements I have overlooked. I have had the most success with using setTimeout on the function that is doing the measuring and waiting for it to measure the same thing several times in a row. But even that has been inconsistent at times and occasionally b will be slightly less wide than a when the web page appears.
One option when you need to keep checking things, even if they've been true a couple of times, is to continue to use setTimeout until the values stop changing for a certain number of iterations. For example, if they are still the same after 10 timeouts, stop the timeouts altogether.
something like...
var count = 0;
function check() {
  do_something();
  if (same_as_last_time == true) { count++; }
  if (count < 10) setTimeout(check, 100);
}
setTimeout(check, 100);
Although this could be wildly inefficient and possibly ineffective depending on how staggered your data is appearing.
I believe the actual solution here lies in the fact that you are using ".scrollWidth" as your measurement point, which is a terrible property to standardize your widths across browsers. See this SO post for more info on that (https://stackoverflow.com/a/33672058/3479741).
I would recommend instead using a different method of acquiring the width and compare to see whether your results are more consistent. There are many options that remain more consistent than ".scrollWidth" in the article above.
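For example, a hedged sketch using getBoundingClientRect, which returns fractional pixel widths and tends to be more consistent across browsers than scrollWidth (plain DOM shown; inside Polymer you would query via Polymer.dom(this.root) as in the question):
const el = document.querySelector('#a');
console.log(el.getBoundingClientRect().width); // fractional, layout-based width
console.log(el.scrollWidth);                   // rounded, includes overflow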

When is Element.getBoundingClientRect guaranteed to be updated / accurate?

I am working on some code that uses Element.getBoundingClientRect (gBCR), coupled with inline style updates, to perform calculation. This is not for a general website and I am not concerned or interested in if there are "better CSS ways" of doing this task.
The JavaScript is run synchronously and performs these steps:
The parent's gBCR is fetched
Calculations are performed, and
A child element of the parent has inline CSS styles (eg. size and margins) updated
The parent's gBCR is fetched again
Am I guaranteed that the computed client bounds will reflect the new bounding rectangle of the parent at step 4?
If not guaranteed by a specification, is this "guaranteed" by modern1 browser implementations? If "mostly guaranteed", what notable exceptions are there?
Elements are not being added to or removed from the DOM and the elements being modified are direct children of the parent node; if such restrictions / information is relevant.
1"Modern": UIWebView (iOS 6+), WebView (Android 2+), and the usual Chrome/WebKit, FF, IE9+ suspects - including mobile versions.
I'm stuck at gBCR unreliability on iOS 8.4.1 / Safari 8.0.
Prepare a large div on top of the body (gBCR is 0) and scroll to the bottom (gBCR is negative). Resize the div to 1x1 and window.scrollY automatically goes to 0. The gBCR should also be 0, but it stays at the negative value. With a setTimeout, 200 ms later, you can confirm the right value of 0.
Old question, but the problem puzzled me and in my searches I stumbled on this question. It might help others.
The best guarantee that I could find to make getBoundingClientRect() work reliably is to force a refresh at the top of the window, calculate the positions, and then go back wherever the user was.
Code would look something like:
const scroll_pos = document.documentElement.scrollTop;     // save current position
window.scrollTo(0, 0);                                     // go up
const v_align = parseInt(el.getBoundingClientRect().top);  // example of gBCR for vert. alignment
// ... whatever other code you might need
window.scrollTo(0, scroll_pos);                            // get back to the starting position
Usually the operation is lightning fast, so the user should not notice it.

What is the most efficient way to modify DOM elements and limit reflow?

When working with a very dynamic UI (think Single Page App) with potentially large JS libraries, view templates, validation, ajax, animations, etc... what are some strategies that will help minimize or reduce the time the browser spends on reflow?
For example, we know there are many ways to accomplish a DIV size change but are there techniques that should be avoided (from a reflow standpoint) and how do the results differ between browsers?
Here is a concrete example:
Given a simple example of 3 different ways to control the size of a DIV when the window is resized, which of these should be used to minimize reflows?
http://jsfiddle.net/xDaevax/v7ex7m6v/
// Method 1: Pure JavaScript
function resize(width, height) {
  var target = document.getElementById("method1");
  // Use target.style directly: calling setAttribute("style", ...) twice
  // would make the second call overwrite the width with the height.
  target.style.width = width + "px";
  target.style.height = height + "px";
  console.log("here");
} // end function

window.onresize = function() {
  var height = (window.innerHeight / 4);
  var width = (window.innerWidth / 4);
  console.log(height);
  resize(width, height); // arguments in the order the function declares them
}
// Method 3: jQuery animate
$(function() {
  $(window).on("resize", function(e, data) {
    $("#method3").animate({ height: window.innerHeight / 4, width: window.innerWidth / 4 }, 600);
  });
});
It's best to try to avoid changing DOM elements whenever possible. At times you can prevent reflow entirely by sticking to CSS properties or, if required, using CSS transforms, so that the element itself is not affected at all and only its visual state changes. Paul Lewis and Paul Irish go into detail about why this is the case in this article.
This approach will not work in all cases because sometimes it's required to change the actual DOM element, but for many animations and such, transforms brings the best performance.
If your operations do require reflow, you can minimize the effect it has by:
Keeping the DOM depth small
Keeping your CSS selector simple (and saving complicated ones to a variable in JavaScript)
Avoiding inline styles
Avoiding tables for layout
Avoiding JavaScript whenever possible
Nicole Sullivan posted a pretty good article on the subject that goes into more details of browser reflows and repaints.
If you're actually changing the DOM, not DOM properties, it's best to do it in larger chunks rather than smaller ones like this Stack Overflow post suggests.
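As an illustration of batching (my sketch, not from the linked post): build nodes in a DocumentFragment and attach them in one operation, so the browser can reflow once instead of once per node. The ul selector is an assumption.
const fragment = document.createDocumentFragment();
for (let i = 0; i < 1000; i++) {
  const li = document.createElement("li");
  li.textContent = "Item " + i;
  fragment.appendChild(li); // off-document: no reflow yet
}
document.querySelector("ul").appendChild(fragment); // single DOM mutation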
In the example you provided, the second method is the best because it uses CSS properties without needing JavaScript. Browsers are pretty good at rendering elements whose dimensions and position are determined solely by CSS. However, it's not always possible to get the element where we need it to be with pure CSS.
The worst method is by far the third, because jQuery's animate is terribly slow to begin with, and firing it on resize makes the animations stack on top of each other, so it lags way behind if you resize much at all. You can prevent this by setting a timeout with a boolean that checks whether it has already fired or, preferably, by not using jQuery's animate for this at all and using jQuery's .css() instead, since the resize handler fires so often that it will look animated anyway.
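For instance, a sketch of that suggestion (mine): coalesce bursts of resize events with a timeout flag and apply the size with .css() instead of .animate().
var resizePending = false;
$(window).on("resize", function () {
  if (resizePending) return; // an update is already scheduled
  resizePending = true;
  setTimeout(function () {
    resizePending = false;
    $("#method3").css({
      height: window.innerHeight / 4,
      width: window.innerWidth / 4
    });
  }, 100);
});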

How to ensure CSS :hover is applied to dynamically added element

I have a script that adds full images dynamically over thumbnails when you hover over them. I've also given the full images a CSS :hover style to make them expand to a larger width (where normally they are constrained to the dimensions of the thumbnail). This works fine if the image loads quickly or is cached, but if the full image takes a long time to load and you don't move the mouse while it's loading, then once it does appear it will usually stay at the thumbnail width (the non-:hover style) until you move the mouse again. I get this behavior in all browsers that I've tried it in. I'm wondering if this is a bug, and if there's a way to fix or work around it.
It may be worth noting that I've also tried to do the same thing in Javascript with .on('mouseenter'), and encountered the same problem.
Due to the nature of the issue, it can be hard to reproduce, especially if you have a fast connection. I chose a largish photo from Wikipedia to demonstrate, but to make it work you might have to change it to something especially large or from a slow domain. Also note that you may have to clear the cache for successive retries.
If you still can't reproduce, you can add an artificial delay to the fullimage.load before the call to anchor.show().
HTML:
<img id="image" src="http://upload.wikimedia.org/wikipedia/commons/thumb/3/32/Cairo_International_Stadium.jpg/220px-Cairo_International_Stadium.jpg" />
CSS:
.kiyuras-image {
  position: absolute;
  top: 8px;
  left: 8px;
  max-width: 220px;
}
.kiyuras-image:hover {
  max-width: 400px;
}
JS:
$(function () {
  var fullimageurl = 'http://upload.wikimedia.org/wikipedia/commons/3/32/Cairo_International_Stadium.jpg';
  var fullimage = $('<img/>')
    .addClass('kiyuras-image')
    .load(function () {
      anchor.show();
    });
  var anchor = $('<a/>').hide().append(fullimage);
  $('body').prepend(anchor);
  $("#image").on('mouseenter', function () {
    fullimage.attr('src', fullimageurl);
    $(this).off('mouseenter');
  });
});
JS Bin
Updated JS Bin with 1.5-second delay added (Hopefully makes issue clearer)
Again: reproducing the issue involves clearing your cache of the large image, hovering over the original image to initiate the loading of the large image, and then not moving your mouse while it loads. The intended behavior is for the large image to properly take on the :hover pseudo-class when it eventually loads. The issue I see when it takes longer than ~0.75 s to load is that it does not take on :hover until you jiggle the mouse a little.
Edit: See my comments on #LucaFagioli's answer for further details of my use case.
Edit, the sequel: I thought I already did this, but I just tried to reproduce the issue in Firefox and I couldn't. Perhaps this is a Chrome bug?
Most browsers update their hover states only when the cursor moves over an element by at least one pixel. When the cursor enters the thumbnail's img it gets hover applied and runs your mouseenter handler. If you keep your cursor still until the full-sized image loads, your old img (the thumbnail) will keep the hover state and the new one won't get it.
To get it working in these browsers, move the hover pseudo-class to a common parent element in the CSS; for example, enclose both imgs in a span.
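A sketch of that workaround in the question's setup (mine, not pozs'; the preview-wrap class name is hypothetical): give the thumbnail and the dynamically added image a common parent and hang the hover rule off that parent.
// Wrap the thumbnail, then add the full image to the same parent:
$('#image').wrap('<span class="preview-wrap"></span>');
$('#image').parent().append(fullimage); // fullimage as defined in the question

// Equivalent CSS rule, injected here so the sketch is self-contained:
$('<style>.preview-wrap:hover .kiyuras-image { max-width: 400px; }</style>')
  .appendTo('head');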
If the selectors are correct, CSS will be applied to all elements, dynamic or otherwise. This includes all pseudo classes, and will change as attributes in the DOM change.
[Edit: while my explanation might be of interest, pozs' solution above is nicer, so I suggest using that if you can.]
The hover pseudo-class specification is quite relaxed concerning when it should be activated:
CSS does not define which elements may be in the above states, or how the states are entered and left. Scripting may change whether elements react to user events or not, and different devices and UAs may have different ways of pointing to, or activating elements.
In particular, it is not being activated when you update the visibility of the anchor element on load.
You can get around this fairly easily: copy the hover styles to a class, intercept the cursor moving over the element that it will eventually cover, and based on that add or remove your class from the element.
Demo: JS Bin (based on your delayed example).
Javascript:
$("#image")
.on('mouseenter', function () {
fullimage.attr('src',fullimageurl).toggleClass('mouseover', true);
$(this).off('mouseenter');
})
.mouseleave(function() {
fullimage.toggleClass('mouseover', false);
});
CSS:
.kiyuras-image:hover, .kiyuras-image.mouseover {
  max-width: 400px;
}
TL;DR: You cannot rely on :hover applying to dynamically added elements underneath the cursor. However, there are workarounds available in both pure CSS and Javascript.
I'm upvoting both Jordan Gray's and pozs' answers, and I wish I could award them both the bounty. Jordan Gray addressed the issue re: the CSS specification in a somewhat conclusive way and offered (another) working fix that still allowed for :hover and other CSS effects like transitions, except on load. pozs provided a solution that works even better and avoids Javascript for any of the hover events; I provide essentially the same solution here, but with a div instead of a span. I decided to award the bounty to him, but I think Jordan's input was essential. I'm adding and accepting my own answer because I felt the need to elaborate more on all of this myself. (Edit: Changed, I accepted pozs')
Jordan referenced the CSS2 spec; I will refer instead to CSS3. As far as I can tell, they don't differ on this point.
The pseudo-class in question is :hover, which refers to elements that the user has "designated with a pointing device." The exact definition of the behavior is deliberately left vague to allow for different kinds of interaction and media, which unfortunately means that the spec does not address questions like: "Should a new element that appears under the pointing device have this pseudo-class applied?" This is a hard question to answer. Which answer will align with user intent in a majority of cases? A dynamic change to a page the user is interacting with would normally be a result of ongoing user interaction or preparation for the same. Therefore, I would say yes, and most current browsers seem to agree. Normally, when you add an element under the cursor, :hover is immediately applied. You can see this here: The jsbin I originally posted. Note that if there's a delay in loading the larger image, you may have to refresh the page to get it to work, for reasons I'll go into.
Now, there's a similar case where the user activates the browser itself with the cursor held stationary over an element with a :hover rule; should it apply in that case? The mouse "hover" in this case was not a result of direct user interaction. But the pointing device is designating it, right? Besides, any movement of the mouse will certainly result in an unambiguous interaction. This is a harder question to answer, and browsers answer it in different ways. When you're activating them, Chrome and Firefox do not change :hover state until you move the mouse (Even if you activated them with a click!). Internet Explorer, on the other hand, updates :hover state as soon as it's activated. In fact, it updates it even when it's not active, as long as it's the first visible window under the mouse. You can see this yourself using the jsbin linked above.
Let's return to the first case, though, because that's where my current issue arises. In my case, the user hasn't moved the mouse for a significant length of time (over a second), and an element is added directly underneath the cursor. This could more easily be argued to be a case where user interaction is ambiguous, and where the pseudo-class should not be toggled. Personally, I think that it should still be applied. However, most browsers do not seem to agree with me. When you hover over the image for the first time and then do not move your mouse in this jsbin (Which is the one I posted in my question to demonstrate the issue, and, like the first one, has a straightforward :hover selector), the :hover class is not applied in current Chrome, Opera, and IE. (Safari also doesn't apply it, but interestingly, it does if you go on to press a key on the keyboard.) In Firefox, however, the :hover class is applied immediately. Since Chrome and Firefox were the only two I initially tested with, I thought this was a bug in Chrome. However, the spec is more or less completely silent on this point. Most implementations say nay; Firefox and I say aye.
Here are the relevant sections of the spec:
The :hover pseudo-class applies while the user designates an element with a pointing device, but does not necessarily activate it. For example, a visual user agent could apply this pseudo-class when the cursor (mouse pointer) hovers over a box generated by the element. User agents that do not support interactive media do not have to support this pseudo-class. Some conforming user agents that support interactive media may not be able to support this pseudo-class (e.g., a pen device that does not detect hovering).
[...]
Selectors doesn't define if the parent of an element that is ‘:active’ or ‘:hover’ is also in that state.
[...]
Note: If the ‘:hover’ state applies to an element because its child is designated by a pointing device, then it's possible for ‘:hover’ to apply to an element that is not underneath the pointing device.
So! On to the workarounds! As several have zealously pointed out in this thread, Javascript and jQuery provide solutions for this as well, relying on the 'mouseover' and 'mouseenter' DOM events. I explored quite a few of those solutions myself, both before and after asking this question. However, these have their own issues, they have slightly different behavior, and they usually involve simply toggling a CSS class anyway. Besides, why use Javascript if it's not necessary?
I was interested in finding a solution that used :hover and nothing else, and this is it (jsbin). Instead of putting the :hover on the element being added, we instead put it on an existing element that contains that new element, and that takes up the same physical space; in this case, a div containing both the thumbnail and the new larger image (which, when not hovered, will be the same size as the div and thumbnail). This would seem to be fairly specific to my use case, but it could probably be accomplished in general using a positioned div with the same size as the new element.
Adding: After I finished composing this answer, pozs provided basically the same solution as above!
A compromise between this and one of the full-Javascript solutions is to have a one-time-use class that will effectively rely on Javascript/DOM hover events while adding the new element, and then remove all that and rely on :hover going forward. This is the solution Jordan Gray offered (Jsbin)
Both of these work in all the browsers I tried: Chrome, Firefox, Opera, Safari, and Internet Explorer.
From this part of your question: "This works fine if the image loads quickly or is cached, but if the full image takes a long time to load and you don't move the mouse while it's loading,"
Could it be worthwhile to "preload" all of the images first with JavaScript? This may allow all of the images to load successfully first, and it may be a little more user-friendly for people with slower connections.
You could do something like this: http://jsfiddle.net/jR5Ba/5/
In summary, append a loading layout in front of your image, then append a div containing your large image with a .load() callback to remove your loading layer.
The fiddle above has not been simplified and cleaned up due to lack of time, but I can continue to work on it tomorrow if needed.
var $imageContainer = $("#image-container");
var $image = $('#image');

$imageContainer.on({
  mouseenter: function (event) {
    // Add a loading class
    $imageContainer.addClass('loading');
    $image.css('opacity', 0.5);
    // Insert div (for styling) containing large image
    $(this).append('<div><img class="hidden large-image-container" id="' + this.id + '-large" src="' + fullimageurl + '" /></div>');
    // Append large image load callback
    $('#' + this.id + '-large').load(function () {
      $imageContainer.removeClass('loading');
      $image.css('opacity', 1);
      $(this).slideDown('slow');
      //alert("The image has loaded!");
    });
  },
  mouseleave: function (event) {
    // Remove loading class
    $imageContainer.removeClass('loading');
    // Remove div with large image
    $('#' + this.id + '-large').remove();
    $image.css('opacity', 1);
  }
});
EDIT
Here is a new version of the fiddle including the right size loading layer with an animation when the large picture is displayed : http://jsfiddle.net/jR5Ba/6/
Hope it will help
Don't let the IMG tag get added to the DOM until it has an image to download. That way the Load event won't fire until the image has been loaded. Here is the amended JS:
$(function () {
  var fullimageurl = 'http://upload.wikimedia.org/wikipedia/commons/3/32/Cairo_International_Stadium.jpg';
  var fullimage = $('<img/>')
    .addClass('kiyuras-image')
    .load(function () {
      anchor.show(); // Only happens after IMG src has loaded
    });
  var anchor = $('<a/>').hide();
  $('body').prepend(anchor);
  $("#image").on('mouseenter', function () {
    fullimage.attr('src', fullimageurl); // IMG has source
    $(this).off('mouseenter');
    anchor.append(fullimage); // Append IMG to DOM now.
  });
});
I did that and it worked on Chrome (version 22.0.1229.94 m):
I changed the CSS to this:
.kiyuras-image {
  position: absolute;
  top: 8px;
  left: 8px;
  max-width: 400px;
}
.not-hovered {
  max-width: 220px;
}
and the script this way:
$(function () {
  var fullimageurl = 'http://upload.wikimedia.org/wikipedia/commons/3/32/Cairo_International_Stadium.jpg';
  var fullimage = $('<img/>')
    .addClass('kiyuras-image')
    .load(function () {
      anchor.show();
    });
  var anchor = $('<a/>').hide().append(fullimage);
  $('body').prepend(anchor);
  $('.kiyuras-image').on('mouseout', function () {
    $(this).addClass('not-hovered');
  });
  $('.kiyuras-image').on('mouseover', function () {
    $(this).removeClass('not-hovered');
  });
  $("#image").one('mouseover', function () {
    fullimage.attr('src', fullimageurl);
  });
});
Basically I think it's a Chrome bug in detecting/rendering the 'hover' status; in fact, when I tried to simply change the CSS to:
.kiyuras-image {
  position: absolute;
  top: 8px;
  left: 8px;
  max-width: 400px;
}
.kiyuras-image:not(:hover) {
  position: absolute;
  top: 8px;
  left: 8px;
  max-width: 220px;
}
it still didn't work.
PS: sorry for my English.
I'm not 100% sure why the :hover declaration is only triggered by a slight mouse move. A possible reason could be that technically you never really hover the element: you're shoving the element under the cursor while it is loading (until the large image is completely loaded, the A element has display: none and therefore cannot be in the :hover state). At the same time, that doesn't explain the difference with smaller images, though...
So, a workaround is to just use JavaScript and leave the :hover statement out of the equation. Just show the user one of the two different IMG elements depending on the hover state (toggled in JavaScript). As an extra advantage, the image doesn't have to be scaled up and down dynamically by the browser (a visual glitch in Chrome).
See http://jsbin.com/ifitep/34/
UPDATE: By using JavaScript to add an .active class on the large image, it's entirely possible to keep using native CSS animations. See http://jsbin.com/ifitep/48

Finding the first word that browsers will classify as overflow

I'm looking to build a page that has no scrolling, and will recognize where the main div's contents overflow. The code will remember that point and construct a separate page that starts at that word or element.
I've spent a few hours fiddling, and here are the approaches that past questions employ:
1. Clone the div, incrementally strip words out until the clone's height/width becomes less than the original's.
Too slow. I suppose I could speed it up by exponentially stripping words and then slowly filling it back up--running past the target then backtracking slowly till I hit it exactly--but the approach itself seems kind of brute force.
2. Do the math on the div's dimensions, calculate out how many ems will fit horizontally and vertically.
Would be good if all contents were uniform text, a la a book, but I'm expecting to deal with headlines and images and whatnot, which throws a monkey wrench into this one. Also complicated by browsers' different default font preferences (100%? 144%?)
3. Render items as tokens, stop when the element in question (i.e. one character) is no longer visible to the user onscreen.
This would be my preferred approach, since it'd just involve some sort of isVisible() check on rendered elements. I don't know if it's consistent with how browsers opt to render, though.
Any recommendations on how this might get done? Or are browsers designed to render the whole page length before deciding whether a scrollbar is needed?
Instead of cloning the div, you could just have an overflow: hidden div and set div.scrollTop += div.height() each time you need to advance a 'page'. (Even though the browser will show no scrollbar, you can still programmatically cause the div to scroll.)
This way, you let the browser handle what it's designed to do (flow of content).
Here's a snippet that will automatically advance through the pages: (demo)
var div = $('#pages'),
    h = div.height(),
    len = div[0].scrollHeight,
    p = $('#p');

setInterval(function () {
  var top = div[0].scrollTop += h;
  if (top >= len) top = div[0].scrollTop = 0;
  p.text(Math.floor(top / h) + 1 + '/' + Math.ceil(len / h)); // Show 'page' number
}, 1000);
You could also do some fiddling to make sure that a 'page' does not start in the middle of a block-level element if you don't want (for example) headlines sliced in half. Unfortunately, it will be much harder (perhaps impossible) to ensure that a line of text isn't sliced in half.
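For the block-level concern, a rough sketch (mine; it assumes #pages is positioned so its children's offsetTop values are relative to it): before advancing, snap the scroll target back to the top of the last child that starts at or above it.
function snapToChild(div, target) {
  var best = 0;
  $(div).children().each(function () {
    if (this.offsetTop <= target && this.offsetTop > best) best = this.offsetTop;
  });
  return best;
}

// Usage with the snippet above:
// div[0].scrollTop = snapToChild(div[0], div[0].scrollTop + h);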
