I'm trying to detect the position of the browser's scrollbar with JavaScript to decide where in the page the current view is.
My guess is that I have to detect where the thumb on the track is, and then the height of the thumb as a percentage of the total height of the track. Am I over-complicating it, or does JavaScript offer an easier solution than that? What would some code look like?
You can use element.scrollTop and element.scrollLeft to get the vertical and horizontal offset, respectively, that has been scrolled. element can be document.body if you care about the whole page. If you need percentages, compare them to element.scrollHeight - element.clientHeight and element.scrollWidth - element.clientWidth (again, element may be the body), which are the maximum scroll offsets.
I did this for a <div> on Chrome.
element.scrollTop - the number of pixels hidden above the visible area due to scrolling. With no scroll its value is 0.
element.scrollHeight - the height in pixels of the div's entire content, including the part that is not visible.
element.clientHeight - the height in pixels of the part that you actually see in your browser.
var a = element.scrollTop;
will be the position.
var b = element.scrollHeight - element.clientHeight;
will be the maximum value for scrollTop.
var c = a / b;
will be the percent of scroll [from 0 to 1].
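Putting those three pieces together, a minimal sketch might look like this (the #content id is just a placeholder for your scrollable <div>):
var element = document.querySelector('#content'); // hypothetical id of the scrollable <div>

element.addEventListener('scroll', function () {
    var a = element.scrollTop;                           // current position
    var b = element.scrollHeight - element.clientHeight; // maximum scrollTop
    var c = b > 0 ? a / b : 0;                           // scroll percent, from 0 to 1
    console.log(c);
});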
document.getScroll = function() {
    if (window.pageYOffset != undefined) {
        return [pageXOffset, pageYOffset];
    } else {
        var sx, sy, d = document,
            r = d.documentElement,
            b = d.body;
        sx = r.scrollLeft || b.scrollLeft || 0;
        sy = r.scrollTop || b.scrollTop || 0;
        return [sx, sy];
    }
}
This returns an array with two integers: [scrollLeft, scrollTop].
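A quick usage sketch:
var pos = document.getScroll();
console.log('scrolled ' + pos[0] + 'px right and ' + pos[1] + 'px down');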
It's like this :)
window.addEventListener("scroll", (event) => {
let scroll = this.scrollY;
console.log(scroll)
});
Answer for 2018:
The best way to do things like that is to use the Intersection Observer API.
The Intersection Observer API provides a way to asynchronously observe
changes in the intersection of a target element with an ancestor
element or with a top-level document's viewport.
Historically, detecting visibility of an element, or the relative
visibility of two elements in relation to each other, has been a
difficult task for which solutions have been unreliable and prone to
causing the browser and the sites the user is accessing to become
sluggish. Unfortunately, as the web has matured, the need for this
kind of information has grown. Intersection information is needed for
many reasons, such as:
Lazy-loading of images or other content as a page is scrolled.
Implementing "infinite scrolling" web sites, where more and more content is loaded and rendered as you scroll, so that the user doesn't
have to flip through pages.
Reporting of visibility of advertisements in order to calculate ad revenues.
Deciding whether or not to perform tasks or animation processes based on whether or not the user will see the result.
Implementing intersection detection in the past involved event
handlers and loops calling methods like
Element.getBoundingClientRect() to build up the needed information for
every element affected. Since all this code runs on the main thread,
even one of these can cause performance problems. When a site is
loaded with these tests, things can get downright ugly.
See the following code example:
var options = {
    root: document.querySelector('#scrollArea'),
    rootMargin: '0px',
    threshold: 1.0
}
var observer = new IntersectionObserver(callback, options);
var target = document.querySelector('#listItem');
observer.observe(target);
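The example above assumes a callback has already been defined; a minimal sketch of one might look like this (the logging is only illustrative):
function callback(entries, observer) {
    entries.forEach(function (entry) {
        // entry.isIntersecting says whether #listItem currently meets the threshold inside #scrollArea;
        // entry.intersectionRatio says how much of it is visible.
        console.log(entry.target.id, entry.isIntersecting, entry.intersectionRatio);
    });
}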
Most modern browsers support the IntersectionObserver, but you should use the polyfill for backward-compatibility.
If you care for the whole page, you can use this:
document.body.getBoundingClientRect().top
Snippets
The read-only scrollY property of the Window interface returns the
number of pixels that the document is currently scrolled vertically.
window.addEventListener('scroll', function(){console.log(this.scrollY)})
html{height:5000px}
A shorter version, using an anonymous arrow function (ES6) and avoiding the use of this:
window.addEventListener('scroll', () => console.log(scrollY))
html{height:5000px}
Here is another way to get the scroll position:
const getScrollPosition = (el = window) => ({
    x: el.pageXOffset !== undefined ? el.pageXOffset : el.scrollLeft,
    y: el.pageYOffset !== undefined ? el.pageYOffset : el.scrollTop
});
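For example (the #scrollArea selector is just a placeholder for some scrollable element):
console.log(getScrollPosition());                                      // window scroll, e.g. { x: 0, y: 250 }
console.log(getScrollPosition(document.querySelector('#scrollArea'))); // scroll offsets of a specific element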
If you are using jQuery, there is a perfect function for you: .scrollTop()
Docs here: http://api.jquery.com/scrollTop/
Note: you can use this function to either retrieve or set the position.
See also: http://api.jquery.com/?s=scroll
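A brief sketch of both uses:
// Get the current vertical scroll position of the window
var top = $(window).scrollTop();

// Set it, e.g. jump back to the top of the page
$(window).scrollTop(0);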
I think the following function can help to have scroll coordinate values:
const getScrollCoordinate = (el = window) => ({
    // ?? (rather than ||) so a legitimate offset of 0 isn't skipped in favour of scrollLeft/scrollTop
    x: el.pageXOffset ?? el.scrollLeft,
    y: el.pageYOffset ?? el.scrollTop,
});
I got this idea from this answer with a little change.
Related
Short version of the question
Is it possible to find only the elements on a page that have a background-image or background: url set (including in stylesheets) without looping through every element on the page and using getComputedStyle(el)?
If not, is it possible to optimise the elements I look through to reduce JS execution time?
Longer version of the question
As part of this related question I am trying to find a solution to gathering the size of all elements above the fold that may impact the "visually complete" state of the page.
The related question covers checking all CSS etc. is loaded so I am left with images (including background images) to check.
I am looking to make the following functions as performant as possible (as I may have to call it multiple times if I am unable to solve the main problem in the other question).
The main function is getRects(). I have included the checkRectangle function for completeness, but the main concern is the way I am gathering candidates for checkRectangle (having to loop through every element on the page).
var doc = window.document;
var browserWidth = window.innerWidth || doc.documentElement.clientWidth;
var browserHeight = window.innerHeight || doc.documentElement.clientHeight;
function checkRectangle(el) {
    var rtrn = false;
    if (el.getBoundingClientRect) {
        var rect = el.getBoundingClientRect();
        //check if the bottom is above the top to ensure the element has height, same for width.
        //Then the last 4 checks are to see if the element is in the above the fold viewport.
        if (rect.bottom <= rect.top || rect.right <= rect.left || rect.right < 0 || rect.left > browserWidth || rect.bottom < 0 || rect.top > browserHeight) {
            rtrn = false;
        } else {
            rtrn = {};
            rtrn.bot = rect.bottom;
            rtrn.top = rect.top;
            rtrn.left = rect.left;
            rtrn.right = rect.right;
        }
    }
    return rtrn;
}
//function to get the rectangles above the fold (I do other things to check fonts are loaded etc. so images are the only thing left to check)
function getRects() {
    var rects = [];
    var elements = doc.getElementsByTagName('*');
    //No "g" flag: exec() on a global regex keeps state in lastIndex between calls, which would skip matches.
    var re = /url\(.*(http.*)\)/i;
    for (var i = 0; i < elements.length; i++) {
        var el = elements[i];
        var style = getComputedStyle(el);
        if (el.tagName == "IMG") {
            var rect = checkRectangle(el);
            if (rect) {
                //The URL is stored here for later processing where I match performance timings to the element, it is not relevant other than to show why I convert the `getBoundingClientRect()` to a simple object.
                rect.url = el.src;
                rects.push(rect);
            }
        }
        //I also need to check for background images set in either CSS or with inline styles.
        //The computed value is the string "none" when no background image is set, so exclude that.
        if (style['background-image'] && style['background-image'] !== 'none') {
            var rect = checkRectangle(el);
            if (rect) {
                var matches = re.exec(style['background-image']);
                if (matches && matches.length > 1) {
                    rect.url = matches[1].replace('"', '');
                    rects.push(rect);
                }
            }
        }
    }
    return rects;
}
Concerns / things that I can't work out
I see no way of not looping through all elements on the page and using getComputedStyle(el) to check if they have a background-image set. If I can reduce the candidates sufficiently that would solve my problems.
At the moment (due to having to call the function multiple times) I am not doing a check for background: url, but that needs adding in as efficient a way as possible.
Is there a way of discarding some elements on the page that I can guarantee are not "above the fold" that wouldn't carry a massive performance penalty (bearing in mind anything could be position: fixed at the top of the page)?
Things I know I can do
If I can find a better way of checking for background and background-image then I know images become easier as I can use querySelectorAll and limit that list.
Additional information / thoughts
I am already tracking every network request using PerformanceObserver.
Is there perhaps a way I could look at every request, grab the file name if it is an image and then use the filename to work out where that image is displayed on the page, even if it is a background-image or background: url set in external CSS?
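To sketch what that could look like (the filtering by initiatorType is my assumption about how to spot image requests, not something from the existing setup):
// Watch resource timing entries as they arrive and keep the URLs of image-like requests.
const imageRequestUrls = [];

const resourceObserver = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
        // 'img' covers <img> elements; 'css' covers url(...) requests made from stylesheets
        // (note that 'css' also catches fonts, so further filtering by file extension may be needed).
        if (entry.initiatorType === 'img' || entry.initiatorType === 'css') {
            imageRequestUrls.push(entry.name); // entry.name is the request URL
        }
    }
});

resourceObserver.observe({ type: 'resource', buffered: true });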
Alternative way of phrasing the question.
How could I possibly limit a list of elements that can make a network call for an image and how can I then check if they are above the fold as efficiently as possible?
I know it's a bit to ask, but is the following possible without using jQuery? I have it running with jQuery now, but it seems to be presenting performance issues. If you could help I would be most grateful. I am not lazy, just not very code-knowledgeable. It took me a while to even get this far.
//
// default speed is the lowest valid scroll speed.
//
var default_speed = 1;
//
// speed increments defines the increase/decrease of the acceleration
// between current scroll speed and data-scroll-speed
//
var speed_increment = 0.01;
//
// maximum scroll speed of the elements
//
var data_scroll_speed_a = 2; // #sloganenglish
var data_scroll_speed_b = 5; // #image-ul
//
//
//
var increase_speed, decrease_speed, target_speed, current_speed, speed_increments;
$(document).ready(function() {
    $(window).on('load resize scroll', function() {
        var WindowScrollTop = $(this).scrollTop(),
            Div_one_top = $('#image-ul').offset().top,
            Div_one_height = $('#image-ul').outerHeight(true),
            Window_height = $(this).outerHeight(true);
        if (WindowScrollTop + Window_height >= (Div_one_top + Div_one_height)) {
            $('#sloganenglish').attr('data-scroll-speed', data_scroll_speed_a).attr('data-current-scroll-speed', default_speed).attr('data-speed-increments', data_scroll_speed_a * speed_increment);
            $('#image-ul').attr('data-scroll-speed', data_scroll_speed_b).attr('data-current-scroll-speed', default_speed).attr('data-speed-increments', data_scroll_speed_b * speed_increment);
            increase_speed = true;
            decrease_speed = false;
        } else {
            $('#sloganenglish').attr('data-scroll-speed', '1').attr('data-current-scroll-speed', default_speed);
            $('#image-ul').attr('data-scroll-speed', '1').attr('data-current-scroll-speed', default_speed);
            decrease_speed = true;
            increase_speed = false;
        }
    }).scroll();
});
I don't see any performance issue in your code, although there is room for some optimization. And I don't think jQuery is the problem.
The first thing to notice is the CSS access.
The height is very expensive to access because reading it forces the browser to run several steps of the rendering pipeline (a layout/reflow), as you can see in CSS Triggers.
You are retrieving the height of two elements in a scroll event, which means that they will be calculated many times. Is it really necessary?
If your #image-ul element doesn't change its height, maybe you can calculate it outside of the event only once.
In the case of the window height, I believe it won't change during the scroll event. How about creating different handlers: one for the events that need to (re)calculate the window height and another for the events that don't need that calculation?
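A rough sketch of that caching idea, keeping jQuery and assuming #image-ul doesn't change height on scroll (only on load/resize):
$(document).ready(function () {
    // Measure once, outside the scroll handler.
    var divOneTop = $('#image-ul').offset().top,
        divOneHeight = $('#image-ul').outerHeight(true),
        windowHeight = $(window).outerHeight(true);

    // Re-measure only when the layout can actually change.
    $(window).on('load resize', function () {
        divOneTop = $('#image-ul').offset().top;
        divOneHeight = $('#image-ul').outerHeight(true);
        windowHeight = $(window).outerHeight(true);
    });

    // The scroll handler now only reads scrollTop.
    $(window).on('scroll', function () {
        var reachedBottom = $(this).scrollTop() + windowHeight >= divOneTop + divOneHeight;
        // ...toggle the data attributes / speed flags here, exactly as before
    });
});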
Another noticeable point is that you always set the 'data-current-scroll-speed' and 'data-speed-increments' attributes to the same constant values; they are never changed or unset. Is that really necessary?
Actually, it is not clear what you are really doing. Your performance issue might be somewhere else.
For a fixed header I add/remove an active class to the anchors like this:
// Store basic variables:
var win = $(window),
    sec = $('section'),
    nav = $('nav'),
    anc = $('nav a'),
    pos = nav.offset().top, // Distance of navigation to top of page
    arr = sec.map(function(){ return $(this).offset().top }).get(); // Distance of each section to top of page in an array
// Make function to add/remove classes:
win.scroll(function(){
    var t = win.scrollTop(); // Distance of window top to top of page
    t > pos ? nav.addClass('sticky') : nav.removeClass('sticky'), // Compare viewport top with top of navigation
    // Compare each section position:
    $.each(arr, function(i, val) {
        (t >= Math.floor(val) && t < (val + sec.eq(i).outerHeight(true))) ? anc.eq(i-1).addClass('active')
                                                                          : anc.eq(i-1).removeClass('active')
    })
})
On some sections, however, right at the very top of the section (i.e. after clicking the anchor and not scrolling further), the active class of the previous section (which is no longer in the viewport) doesn't get removed. Most probably due to the calculations returning fractional values?
How can I get the calculations right so only the current section in viewport gets its anchor highlighted?
While I found some very weird behaviour while debugging this, the fix is as simple as subtracting one pixel from the height of the section:
t >= val && t < (val + sec.eq(i).outerHeight(true) -1) ? ...
The rounding now happens directly inside var arr: return Math.floor($(this).offset().top)
What's really weird here is that even though I rounded everything down, in the end the less-than condition still evaluated to true even though mathematically it shouldn't have...
For example, this evaluated to true (which, going by the printed values, it shouldn't):
1824 >= 912 && 1824 < (912 + 912)
So I had to subtract 1px to make the comparison come out right after all.
To me it seems as if jQuery's .outerHeight() doesn't print the fractional part, but does include it internally. As the documentation says:
"The numbers returned by dimensions-related APIs, including .outerHeight(), may be fractional in some cases. Code should not assume it is an integer."
Which gets weird when rounding down doesn't seem to work on it. In my fiddle I set the height of the sections to 912.453125px; .outerHeight() returned 912, but rounding it down with Math.floor() still seemed to keep the fraction, even though it wouldn't print. (See this fiddle, where, when you go to section two and press the debug button, the calculation is the example above.)
So yeah, whatever. I'd prefer a more logical solution, but subtracting a pixel works.
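For what it's worth, a possible alternative to the 1px workaround, assuming the root cause really is the hidden fractional height: floor everything that enters the comparison inside the scroll handler, so both sides are plain integers (a sketch, not tested against the fiddle):
$.each(arr, function(i, val) {
    // Floor both the section top and the section bottom so the comparison never
    // mixes an integer with a value that has a hidden fractional part.
    var sectionTop = Math.floor(val),
        sectionBottom = Math.floor(val + sec.eq(i).outerHeight(true));
    (t >= sectionTop && t < sectionBottom) ? anc.eq(i-1).addClass('active')
                                           : anc.eq(i-1).removeClass('active')
})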
How can I efficiently find all of the DOM elements that are on top of a specified query element?
That is, I want a Javascript function that when I pass in a reference to a DOM element will return an array of all DOM elements that have non-zero overlap with the input element and appear above it visually. My specific goal is to find those elements that may be visually blocking elements below them.
The context is one in which I do not have advanced knowledge of the web page, the query element, or much of anything else. Elements can appear above others for a variety of reasons.
I can of course do this through an exhaustive search of the DOM, but that's very inefficient and not practical when the DOM tree grows large. I could also use the newer elementFromPoint to sample positions from within the query element to ensure that it is indeed on top, but that seems pretty inefficient.
Any ideas on how to do this better?
Thanks!
I cannot think of a simpler way than using elementFromPoint. You don't seem to want to use it, but it can give you some consistent results.
If there are multiple layered elements, you should adapt the code to move the elements already grabbed (or set them invisible) and call the function again to get a new set of elements.
For the basic idea:
function upperElements(el) {
    // getBoundingClientRect() gives viewport-relative coordinates, which is what elementFromPoint expects
    // (offsetTop/offsetLeft are relative to the offset parent, not the viewport).
    var rect = el.getBoundingClientRect(),
        top = rect.top,
        left = rect.left,
        width = rect.width,
        height = rect.height,
        elemTL = document.elementFromPoint(left, top),
        elemTR = document.elementFromPoint(left + width - 1, top),
        elemBL = document.elementFromPoint(left, top + height - 1),
        elemBR = document.elementFromPoint(left + width - 1, top + height - 1),
        elemCENTER = document.elementFromPoint(parseInt(left + (width / 2)), parseInt(top + (height / 2))),
        elemsUpper = [];
    if (elemTL != el) elemsUpper.push(elemTL);
    if (elemTR != el && $.inArray(elemTR, elemsUpper) === -1) elemsUpper.push(elemTR);
    if (elemBL != el && $.inArray(elemBL, elemsUpper) === -1) elemsUpper.push(elemBL);
    if (elemBR != el && $.inArray(elemBR, elemsUpper) === -1) elemsUpper.push(elemBR);
    if (elemCENTER != el && $.inArray(elemCENTER, elemsUpper) === -1) elemsUpper.push(elemCENTER);
    return elemsUpper;
}
jsFiddle
It's unfortunate, but there is no way to have a solution that will not iterate through all DOM elements, because you can put any element anywhere on screen through CSS rules.
The best you can do is actually iterate over all the DOM elements and do a hit test.
If I had to do this, I would rely on jQuery, which is a widely used cross-browser API under constant improvement.
Take a look at http://api.jquery.com/position/ , http://api.jquery.com/width/ and http://api.jquery.com/height/
If performance is very important, you can gain a constant factor by diving into their implementation and improving it for your specific case, but keep in mind that the complexity will not go below O(number of DOM elements).
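A minimal sketch of that brute-force hit test, using getBoundingClientRect rather than the jQuery helpers mentioned above (deciding which overlapping element is actually painted on top would additionally require inspecting z-index and stacking contexts, which this sketch ignores):
// Return every element whose bounding box overlaps the target's box.
function overlappingElements(target) {
    var targetRect = target.getBoundingClientRect(),
        all = document.body.getElementsByTagName('*'),
        result = [];
    for (var i = 0; i < all.length; i++) {
        var el = all[i];
        // Skip the target itself and its ancestors/descendants.
        if (el === target || el.contains(target) || target.contains(el)) continue;
        var r = el.getBoundingClientRect();
        var overlaps = r.left < targetRect.right && r.right > targetRect.left &&
                       r.top < targetRect.bottom && r.bottom > targetRect.top;
        if (overlaps) result.push(el);
    }
    return result;
}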
This is for a demo... and I was just curious: can you detect if the window has been moved? Like if you move Firefox/Chrome/IE around your monitor? I doubt it, but I wanted to ask, since you can check for resizes and focused/blurred windows.
I can only think of this (heavy) workaround, where you check every x milliseconds whether window.screenX and window.screenY have changed:
var oldX = window.screenX,
    oldY = window.screenY;
var interval = setInterval(function(){
    if(oldX != window.screenX || oldY != window.screenY){
        console.log('moved!');
    } else {
        console.log('not moved!');
    }
    oldX = window.screenX;
    oldY = window.screenY;
}, 500);
Though I would not recommend this: it might be slow, and I'm not sure whether screenX and screenY are supported by all browsers.
A potentially more optimised version of this is to only check for window movement while the mouse is outside of the window, combined with Harmen's answer:
var interval;
window.addEventListener("mouseout", function(evt){
    if (evt.toElement === null && evt.relatedTarget === null) {
        //if outside the window...
        if (console) console.log("out");
        interval = setInterval(function () {
            //do something with evt.screenX/evt.screenY
        }, 250);
    } else {
        //if inside the window...
        if (console) console.log("in");
        clearInterval(interval);
    }
});
If using jQuery, it may normalise screenX/Y in this case, so it's worth running a few tests on that. jQuery would use this format instead of addEventListener:
$(window).on('mouseout', function () {});
If you are moving the window in Windows via Alt + Space and find that such window moves are ignored, I would recommend adding an extra level of detection via keypress events.
Re the first answer: I use the 'poll window position' approach in production code. It's a very lightweight thing to do. Asking for a couple of object properties twice a second is not going to slow anything down. Cross-browser window position is given by:
function get_window_x_pos()
{
    var winx;
    // Check for undefined rather than truthiness, so a position of 0 (left edge of the screen) is handled correctly.
    if (window.screenX !== undefined)
        winx = window.screenX;
    else if (window.screenLeft !== undefined)
        winx = window.screenLeft;
    return winx;
}
and similarly for the vertical position. In my code I use this to fire an AJAX event off to the server to store the position and size of the window so that next time it will open where it was the last time (I'm probably moving to HTML5 local storage soon). One little wrinkle you might want to cover is not generating spurious updates while the window is being dragged. The way to handle this is to register when the window has been moved for the first time and only trigger an update when two subsequent polls of the window position return the same value. A further complication is windows which allow resizing from all sides: if the left or top side is dragged, the DOM will give you a resize event, but the nominal window position will have altered as well.
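A rough sketch of that 'two equal polls' idea, reusing get_window_x_pos from above (saveWindowGeometry is a hypothetical stand-in for the AJAX or localStorage call):
var lastX = get_window_x_pos(),
    lastY = window.screenY, // a get_window_y_pos() helper could be used here in the same way
    moved = false;

setInterval(function () {
    var x = get_window_x_pos(),
        y = window.screenY;
    if (x !== lastX || y !== lastY) {
        // The window is (still) being dragged; just remember that it moved.
        moved = true;
    } else if (moved) {
        // Two consecutive polls returned the same position: the drag has settled.
        moved = false;
        saveWindowGeometry(x, y, window.outerWidth, window.outerHeight); // hypothetical persistence call
    }
    lastX = x;
    lastY = y;
}, 500);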
Unfortunately not. The DOM is only notified about window sizes, cursor positions, "focus" and "blur", etc.: anything that affects drawing. Since moving a window doesn't necessarily require any of the contents to be "redrawn" (in a JavaScript/HTML engine sort of sense), the DOM therefore doesn't need to know about it.
Sadly, no. I did find this page that claims there is such a thing, but I tested it in IE, Chrome, and Firefox: no luck.