How can I efficiently find all of the DOM elements that are on top of a specified query element?
That is, I want a Javascript function that when I pass in a reference to a DOM element will return an array of all DOM elements that have non-zero overlap with the input element and appear above it visually. My specific goal is to find those elements that may be visually blocking elements below them.
The context is one in which I do not have advanced knowledge of the web page, the query element, or much of anything else. Elements can appear above others for a variety of reasons.
I can of course do this through an exhaustive search of the DOM, but that's very inefficient and not practical when the DOM tree grows large. I could also use the newer elementFromPoint to sample positions from within the query element to ensure that it is indeed on top, but that seems pretty inefficient.
Any ideas on how to do this better?
Thanks!
I cannot think of a simpler way than using elementFromPoint. You don't seem to want to use it, but it can give you consistent results.
If there are multiple layered elements, you can adapt the code to temporarily hide (or move) the elements you have already found and call the function again to grab the next layer (a rough sketch of that follows the code below).
For the basic idea:
function upperElements(el) {
  // getBoundingClientRect() gives viewport-relative coordinates, which is what
  // elementFromPoint() expects (offsetTop/offsetLeft are relative to the
  // offsetParent and give the wrong point once positioned ancestors are involved).
  var rect = el.getBoundingClientRect(),
      points = [
        [rect.left, rect.top],                                     // top-left
        [rect.right - 1, rect.top],                                // top-right
        [rect.left, rect.bottom - 1],                              // bottom-left
        [rect.right - 1, rect.bottom - 1],                         // bottom-right
        [rect.left + rect.width / 2, rect.top + rect.height / 2]   // center
      ],
      elemsUpper = [];

  for (var i = 0; i < points.length; i++) {
    var hit = document.elementFromPoint(points[i][0], points[i][1]);
    if (hit && hit !== el && $.inArray(hit, elemsUpper) === -1) {
      elemsUpper.push(hit);
    }
  }
  return elemsUpper;
}
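For the multi-layer case mentioned above, one rough way to "peel off" elements without permanently moving them is to toggle pointer-events (which elementFromPoint respects) while probing a single point. This is only a sketch under that assumption; elementsAbovePoint, the iteration cap and the pointer-events trick are my own choices, not part of the original answer:

function elementsAbovePoint(el, x, y) {
  var above = [];
  var savedPointerEvents = [];
  var hit = document.elementFromPoint(x, y);

  // Stop when we reach the query element itself (or one of its descendants),
  // and cap the iterations just in case something unexpected happens.
  while (hit && hit !== el && !el.contains(hit) && above.length < 100) {
    above.push(hit);
    savedPointerEvents.push(hit.style.pointerEvents);
    hit.style.pointerEvents = 'none'; // elementFromPoint now skips this node
    hit = document.elementFromPoint(x, y);
  }

  // Restore the inline pointer-events values we overwrote.
  for (var i = 0; i < above.length; i++) {
    above[i].style.pointerEvents = savedPointerEvents[i];
  }
  return above;
}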
It's unfortunate, but there is no solution that avoids iterating through all DOM elements, because CSS rules can place any element anywhere on screen.
The best you can do is actually iterate over all the DOM elements and run a hit test on each.
If I had to do this, I would rely on jQuery, which is a widely used cross-browser API under constant improvement.
Take a look at http://api.jquery.com/position/ , http://api.jquery.com/width/ and http://api.jquery.com/height/
If performance is very important, you can gain a constant factor by diving into their implementation and specialising it for your case, but keep in mind that the complexity will not go below O(number of DOM elements).
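If you do end up with the brute-force scan, a minimal sketch of the hit test (plain DOM, no jQuery) could look like the following. Note it only finds geometric overlap; deciding which overlapping elements actually paint above the query element still needs z-index / stacking-context logic on top, and the function name is just illustrative:

function overlappingElements(el) {
  var target = el.getBoundingClientRect();
  var all = document.getElementsByTagName('*');
  var hits = [];
  for (var i = 0; i < all.length; i++) {
    var other = all[i];
    // Skip the element itself, its descendants and its ancestors.
    if (other === el || el.contains(other) || other.contains(el)) continue;
    var r = other.getBoundingClientRect();
    var intersects = r.left < target.right && r.right > target.left &&
                     r.top < target.bottom && r.bottom > target.top;
    if (intersects) hits.push(other);
  }
  return hits;
}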
Short version of the question
Is it possible to find only the elements on a page that have a background-image or background: url set (including in stylesheets) without looping through every element on the page and calling getComputedStyle(el)?
If not, is it possible to optimise the set of elements I look through to reduce JS execution time?
Longer version of the question
As part of this related question I am trying to find a solution to gathering the size of all elements above the fold that may impact the "visually complete" state of the page.
The related question covers checking all CSS etc. is loaded so I am left with images (including background images) to check.
I am looking to make the following functions as performant as possible (as I may have to call them multiple times if I am unable to solve the main problem in the other question).
The main function is getRects(). I have included the checkRectangle function for completeness, but the main concern is the way I gather candidates for checkRectangle (having to loop through every element on the page).
var doc = window.document;
var browserWidth = window.innerWidth || doc.documentElement.clientWidth;
var browserHeight = window.innerHeight || doc.documentElement.clientHeight;
function checkRectangle(el){
var rtrn = false;
if (el.getBoundingClientRect) {
var rect = el.getBoundingClientRect();
//check if the bottom is above the top to ensure the element has height, same for width.
//Then the last 4 checks are to see if the element is in the above the fold viewport.
if (rect.bottom <= rect.top || rect.right <= rect.left || rect.right < 0 || rect.left > browserWidth || rect.bottom < 0 || rect.top > browserHeight) {
rtrn = false;
}else{
rtrn = {};
rtrn.bot = rect.bottom;
rtrn.top = rect.top;
rtrn.left = rect.left;
rtrn.right = rect.right;
}
}
return rtrn;
}
//function to get the rectangles above the fold (I do other things to check fonts are loaded etc. so images are the only thing left to check)
function getRects(){
var rects = [];
var elements = doc.getElementsByTagName('*');
var re = /url\(.*(http.*)\)/i; //no "g" flag: with "g", exec() keeps lastIndex between calls and can skip matches
for (var i = 0; i < elements.length; i++) {
var el = elements[i];
var style = getComputedStyle(el);
if(el.tagName == "IMG"){
var rect = checkRectangle(el);
if(rect){
//The URL is stored here for later processing where I match performance timings to the element, it is not relevant other than to show why I convert the `getBoundingClientRect()` to a simple object.
rect.url = el.src;
rects.push(rect);
}
}
//I also need to check for background images set in either CSS or with inline styles.
//getComputedStyle always returns a value here ("none" when unset), so compare against "none" rather than relying on truthiness.
if (style['background-image'] && style['background-image'] !== 'none') {
var rect = checkRectangle(el);
if(rect){
var matches = re.exec(style['background-image']);
if (matches && matches.length > 1){
rect.url = matches[1].replace(/["']/g, ''); //strip any quotes left around the url() value
rects.push(rect);
}
}
}
}
return rects;
}
Concerns / things that I can't work out
I see no way of not looping through all elements on the page and using getComputedStyle(el) to check if they have a background-image set. If I can reduce the candidates sufficiently that would solve my problems.
At the moment (due to having to call the function multiple times) I am not doing a check for background: url, but that needs adding in as efficient a way as possible.
Is there a way of discarding some elements on the page that I can guarantee are not "above the fold" without a massive performance penalty (bearing in mind anything could be position: fixed at the top of the page)?
Things I know I can do
If I can find a better way of checking for background and background-image then I know images become easier as I can use querySelectorAll and limit that list.
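For what it's worth, one way to narrow the candidates up front (the selector here is only an illustration) is to grab the <img> elements and anything with an inline background style, leaving only stylesheet-defined backgrounds for the computed-style pass:

// Only <img> elements with a src, plus anything with an inline background
// style; backgrounds set in external stylesheets still need getComputedStyle.
var candidates = document.querySelectorAll('img[src], [style*="background"]');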
Additional information / thoughts
I am already tracking every network request using PerformanceObserver.
Is there perhaps a way I could look at every request, grab the file name if it is an image and then use the filename to work out where that image is displayed on the page, even if it is a background-image or background: url set in external CSS?
Alternative way of phrasing the question.
How could I possibly limit a list of elements that can make a network call for an image and how can I then check if they are above the fold as efficiently as possible?
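To illustrate the PerformanceObserver idea from above, a hedged sketch (the imageUrls set and the extension test are placeholder choices, not part of my existing code) might look like this:

// Collect the URL of every image-like resource as it loads, so it can later
// be matched against src / background-image values found in the DOM.
var imageUrls = new Set();
var po = new PerformanceObserver(function (list) {
  list.getEntries().forEach(function (entry) {
    if (entry.initiatorType === 'img' ||
        /\.(png|jpe?g|gif|webp|svg)(\?|$)/i.test(entry.name)) {
      imageUrls.add(entry.name);
    }
  });
});
po.observe({ type: 'resource', buffered: true });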
In order to determine the page margin in various positions on the page I need to get the offset, width and height of each DOM element.
I loop over the DOM elements recursively and save each element's attributes.
I tried it with jQuery's $(el).offset(), but it was very slow. I guess that creating a jQuery object for each element with $(...) is what makes it slow.
Then I tried an old implementation from some older code which uses native JS, and it was about 4 times faster.
So what I'm really asking is: is there an even faster method to accomplish this?
I should mention that my script runs on publishers' sites and I don't really know anything about the page that I'm running on.
I guess this is one solution, using map() and getBoundingClientRect():
var elements = Array.prototype.slice.call(document.getElementsByClassName('someclass'));
var docRect = document.documentElement.getBoundingClientRect();
var coords = elements.map(function (el) {
  var rect = el.getBoundingClientRect();
  // Subtracting the document's own rect makes the coordinates relative to the
  // document instead of the viewport.
  return { x: rect.left - docRect.left, y: rect.top - docRect.top };
});
This will return an array with the coordinates of every element of the given class. You can change document.getElementsByClassName to querySelectorAll if that better suits your needs.
Subtracting the document's rect also makes the coordinates relative to the document instead of the viewport.
I have the following block of code that I need to complete as quickly as possible:
//someSpans is an array of spans, with each span containing two child spans inside of it
$.each( someSpans, function (i, span) {
//Get the span widths, then add them to the style to make them permanent
var aSpan = span.children[0];
var bSpan = span.children[1];
span.style.width = (aSpan.offsetWidth + bSpan.offsetWidth) + 'px';
aSpan.style.width = aSpan.offsetWidth + 'px';
bSpan.style.width = bSpan.offsetWidth + 'px';
});
If someSpans is an array that contains 1000 objects, the loop presented above will force roughly 3000 layout recalculations (reflows), even though nothing on screen actually changes, since the new "width" values in the style match the existing "auto" widths. Is there a way to prevent the browser from recalculating the layout until the loop is finished? I feel like this would greatly reduce the time it takes for the loop to complete.
I feel like requestAnimationFrame might be the key to doing what I'm looking for, but maybe I'm off base.
While the comments asking "why?" make a great point, here's a slightly better answer.
Part of your problem here is the alternating reads and writes of layout properties. Namely, setting span.style.width has now made aSpan.offsetWidth "dirty", so the layout must be recalculated before it can be read again. However, consider this:
var aWidth = aSpan.offsetWidth;
var bWidth = bSpan.offsetWidth;
span.style.width = (aWidth + bWidth) + 'px';
aSpan.style.width = aWidth + 'px';
bSpan.style.width = bWidth + 'px';
The layout recalculation is now cut down to once per iteration. More specifically, it's the read of offsetWidth at the start of the next iteration that forces the recalculation.
Exercise: While it can make code a little more obtuse, sometimes unnecessarily so, I have sometimes written code like this to loop twice. The first time collects the operations into an array, and the second loop is able to combine all the "setting" operations without accessing any layout values.
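For example, a sketch of that two-pass shape using the code from the question (reads batched in the first loop, writes in the second):

// Pass 1: reads only, so layout is never invalidated between measurements.
var measured = $.map(someSpans, function (span) {
  return {
    span: span,
    aWidth: span.children[0].offsetWidth,
    bWidth: span.children[1].offsetWidth
  };
});
// Pass 2: writes only; no layout reads happen after this point.
$.each(measured, function (i, m) {
  m.span.style.width = (m.aWidth + m.bWidth) + 'px';
  m.span.children[0].style.width = m.aWidth + 'px';
  m.span.children[1].style.width = m.bWidth + 'px';
});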
MSDN has some great documents on JavaScript performance with the most applicable here being "Managing layout efficiently"
I have built a grid of divs as a playground for some visual experiments. In order to use that grid, I need to know the x and y coordinates of each div. That's why I want to create a table with the X and Y position of each div.
X:0 & Y:0 = div:eq(0), X:0 Y:1 = div:eq(1), X:0 Y:2 = div:eq(2), X:0 Y:3 = div:eq(3), X:1 Y:0 = div:eq(4) etc..
What is the best way to build a table like that? Creating an object like this:
{
00: 0,
01: 1,
02: 2,
etc..
}
or is it better to create an array?
position[0][0] = 0
The thing is, I need to use the table in multiple ways. For example: the user clicked div number 13, what are the coordinates of that div? Or: what is the eq of the div at x: 12, y: 5?
That's how I do it right now:
var row = 0
var col = 0
var eq = 0
c.find('div').each(function(i){ // c = $('div#stage')
if (i !=0 && $(this).offset().top != $(this).prev().offset().top){
row++
col = 0
}
$(this).attr({'row': row, 'col': col })
col++
})
I think it would be faster to build a table with the coordinates, instead of adding them as attr or data to the DOM, but I can't figure out how to do this technically.
How would you solve this problem with JS / jQuery?
A few questions:
Will the grid stay the same size or will it grow / shrink?
Will the divs stay in the same position or will they move around?
Will the divs be reused or will they be dynamically added / removed?
If everything is static (fixed grid size, fixed div positions, no dynamic divs), I suggest building two indices to map divs to coordinates and coordinates to divs, something like (give each div an id according to its position, e.g. "x0y0", "x0y1"):
var gridwidth = 20, gridheight = 10,
cells = [], // coordinates -> div
pos = {}, // div -> coordinates
id, i, j; // temp variables
for (i = 0; i < gridwidth; i++) {
cells[i] = [];
for (j = 0; j < gridheight; j++) {
id = 'x' + i + 'y' + j;
cells[i][j] = $('#' + id);
pos[id] = { x: i, y: j };
}
}
Given a set of coordinates (x, y) you can get the corresponding div with:
cells[x][y] // jQuery object of the div at (x, y)
and given a div you can get its coordinates with:
pos[div.attr('id')] // an object with x and y properties
Unless you have very stringent performance requirements, simply using the "row" and "col" attributes will work just fine (although setting them through .data() will be faster). To find the div with the right row/col, just do a c.find("div[row=5][col=12]"). You don't really need the lookup.
Let me elaborate on that a little bit.
If you were to build a lookup table that would allow you to get the row/col for a given div node, you would have to specify that node somehow. Using direct node references is a very bad practice that usually leads to memory leaks, so you'd have to use a node Id or some attribute as a key. That is basically what jQuery.data() does - it uses a custom attribute on the DOM node as a key into its internal lookup table. No sense in copying that code really. If you go the jQuery.data() route, you can use one of the plugins that allows you to use that data as part of the selector query. One example I found is http://plugins.jquery.com/project/dataSelector.
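For instance, a small sketch of that .data() route plus a plain reverse-lookup object, reusing the c container and the row/col attributes from the question's code (the byPos key format is just an illustrative choice):

var byPos = {}; // "row,col" -> jQuery-wrapped div
c.find('div').each(function () {
  var $div = $(this);
  var row = Number($div.attr('row')), col = Number($div.attr('col'));
  $div.data({ row: row, col: col }); // faster to read back later than attr()
  byPos[row + ',' + col] = $div;
});
// byPos['5,12'] is the div at row 5, col 12; $(someDiv).data('row') gives the row back.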
Now that I know what it's for...
It might not seem efficient at first, but I think it would be best to do something like this:
Generate the divs once (server side), give them ids of the form id="X_Y" (X and Y being numbers), position them with CSS and never move them (changing position takes a lot of time compared to e.g. a background change, and you would have to rebuild the array described below).
On DOM ready, create a 2D array and store jQuery objects pointing to the divs in it, so that gridfields[0][12] is a jQuery object like $('#0_12'). You build the array once and never use selectors again, so it's fast. To fill it, select all those divs in a container, run .each() over them, and put them into the proper array fields by splitting their id attributes.
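As a rough sketch of that id-splitting step (assuming the ids really are of the form "X_Y" and the container is #stage, as in the question):

var gridfields = [];
$('#stage div').each(function () {
  var parts = this.id.split('_'); // "3_7" -> ["3", "7"]
  var x = +parts[0], y = +parts[1];
  (gridfields[x] = gridfields[x] || [])[y] = $(this);
});
// gridfields[0][12] is now the jQuery object wrapping the div with id "0_12".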
To move elements you just swap their CSS attributes (or classes if you can - it's faster), or simply set them directly if you have the data at hand.
Another superfast thing (I put this into practice in a project of mine some time ago): bind the click event once to the main container and work out the coordinates by splitting $(e.target).attr('id').
If you bind click to every cell of a 100x100 grid, the browser will probably die. Been there, done that ;)
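A minimal sketch of that single delegated handler (again assuming the "X_Y" ids; jQuery's .on is just one way to attach it):

$('#stage').on('click', function (e) {
  var id = e.target.id;
  if (!/^\d+_\d+$/.test(id)) return; // the click landed on the container itself
  var parts = id.split('_');
  var x = +parts[0], y = +parts[1];
  // react to a click on cell (x, y) here
});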
It may not be intuitive (not changing the divs' positions, but swapping contents etc.), but from my experience it's the fastest it can get (most of the work is done on DOM ready).
Hope you can use it ;) Good luck.
I'm not 100% sure that I understand what you want, but I'd suggest avoiding a library such as jQuery if you are concerned about performance. While jQuery has become faster recently, it still has more overhead than "pure" JS/DOM operations.
Secondly - depending on which browsers you need to support - it may even be better to consider using canvas or SVG scripting.
I'm trying to detect the position of the browser's scrollbar with JavaScript to decide where in the page the current view is.
My guess is that I have to detect where the thumb on the track is, and then the height of the thumb as a percentage of the total height of the track. Am I over-complicating it, or does JavaScript offer an easier solution than that? What would some code look like?
You can use element.scrollTop and element.scrollLeft to get the vertical and horizontal offset, respectively, that has been scrolled. element can be document.body if you care about the whole page. You can compare it to element.offsetHeight and element.offsetWidth (again, element may be the body) if you need percentages.
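For the whole page, a minimal sketch (with fallbacks for older browsers, and the fraction computed against the maximum scrollable distance) might look like:

var scrolled = window.pageYOffset ||
               document.documentElement.scrollTop ||
               document.body.scrollTop || 0;
var maxScroll = document.documentElement.scrollHeight - window.innerHeight;
var fraction = maxScroll > 0 ? scrolled / maxScroll : 0; // 0 = top, 1 = bottom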
I did this for a <div> on Chrome.
element.scrollTop - is the pixels hidden in top due to the scroll. With no scroll its value is 0.
element.scrollHeight - is the pixels of the whole div.
element.clientHeight - is the pixels that you see in your browser.
var a = element.scrollTop;
will be the position.
var b = element.scrollHeight - element.clientHeight;
will be the maximum value for scrollTop.
var c = a / b;
will be the percent of scroll [from 0 to 1].
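Wrapped up as a small helper (same arithmetic as the a/b/c lines above):

function scrollFraction(element) {
  var max = element.scrollHeight - element.clientHeight;
  return max > 0 ? element.scrollTop / max : 0; // 0 = top, 1 = bottom
}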
document.getScroll = function() {
if (window.pageYOffset != undefined) {
return [pageXOffset, pageYOffset];
} else {
var sx, sy, d = document,
r = d.documentElement,
b = d.body;
sx = r.scrollLeft || b.scrollLeft || 0;
sy = r.scrollTop || b.scrollTop || 0;
return [sx, sy];
}
}
Returns an array with two integers: [scrollLeft, scrollTop].
It's like this :)
window.addEventListener("scroll", (event) => {
let scroll = window.scrollY; // "this" can be unreliable here (e.g. undefined in module scripts), so use window explicitly
console.log(scroll)
});
Answer for 2018:
The best way to do things like that is to use the Intersection Observer API.
The Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document's viewport.
Historically, detecting visibility of an element, or the relative visibility of two elements in relation to each other, has been a difficult task for which solutions have been unreliable and prone to causing the browser and the sites the user is accessing to become sluggish. Unfortunately, as the web has matured, the need for this kind of information has grown. Intersection information is needed for many reasons, such as:
Lazy-loading of images or other content as a page is scrolled.
Implementing "infinite scrolling" web sites, where more and more content is loaded and rendered as you scroll, so that the user doesn't have to flip through pages.
Reporting of visibility of advertisements in order to calculate ad revenues.
Deciding whether or not to perform tasks or animation processes based on whether or not the user will see the result.
Implementing intersection detection in the past involved event handlers and loops calling methods like Element.getBoundingClientRect() to build up the needed information for every element affected. Since all this code runs on the main thread, even one of these can cause performance problems. When a site is loaded with these tests, things can get downright ugly.
See the following code example:
var options = {
  root: document.querySelector('#scrollArea'),
  rootMargin: '0px',
  threshold: 1.0
};

// The callback receives the entries whose intersection with the root changed.
var callback = function (entries) {
  entries.forEach(function (entry) {
    console.log(entry.target, 'is intersecting:', entry.isIntersecting);
  });
};

var observer = new IntersectionObserver(callback, options);
var target = document.querySelector('#listItem');
observer.observe(target);
Most modern browsers support the IntersectionObserver, but you should use the polyfill for backward-compatibility.
If you care for the whole page, you can use this:
document.body.getBoundingClientRect().top
Snippets
The read-only scrollY property of the Window interface returns the
number of pixels that the document is currently scrolled vertically.
window.addEventListener('scroll', function(){console.log(this.scrollY)})
html { height: 5000px } /* demo CSS so the page is tall enough to scroll */
Shorter version using anonymous arrow function (ES6) and avoiding the use of this
window.addEventListener('scroll', () => console.log(scrollY))
html { height: 5000px } /* demo CSS so the page is tall enough to scroll */
Here is the other way to get the scroll position:
const getScrollPosition = (el = window) => ({
x: el.pageXOffset !== undefined ? el.pageXOffset : el.scrollLeft,
y: el.pageYOffset !== undefined ? el.pageYOffset : el.scrollTop
});
If you are using jQuery there is a perfect function for you: .scrollTop()
doc here -> http://api.jquery.com/scrollTop/
note: you can use this function to retrieve OR set the position.
see also: http://api.jquery.com/?s=scroll
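For example, .scrollTop() acts as a getter with no argument and a setter with one:

var y = $(window).scrollTop(); // read the current vertical scroll position
$(window).scrollTop(0);        // set it (scroll back to the top)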
I think the following function can help to have scroll coordinate values:
const getScrollCoordinate = (el = window) => ({
x: el.pageXOffset || el.scrollLeft,
y: el.pageYOffset || el.scrollTop,
});
I got this idea from this answer with a little change.