Finding the first word that browsers will classify as overflow - javascript

I'm looking to build a page that has no scrolling, and will recognize where the main div's contents overflow. The code will remember that point and construct a separate page that starts at that word or element.
I've spent a few hours fiddling, and here are the approaches that past questions employ:
1. Clone the div, incrementally strip words out until the clone's height/width becomes less than the original's.
Too slow. I suppose I could speed it up by exponentially stripping words and then slowly filling it back up--running past the target then backtracking slowly till I hit it exactly--but the approach itself seems kind of brute force.
2. Do the math on the div's dimensions, calculate out how many ems will fit horizontally and vertically.
Would be good if all the contents were uniform text, à la a book, but I'm expecting to deal with headlines and images and whatnot, which throws a monkey wrench into this one. It's also complicated by browsers' different default font preferences (100%? 144%?).
3. Render items as tokens, stop when the element in question (i.e. one character) is no longer visible to the user onscreen.
This would be my preferred approach, since it'd just involve some sort of isVisible() check on rendered elements. I don't know if it's consistent with how browsers opt to render, though.
Any recommendations on how this might get done? Or are browsers designed to render the whole page length before deciding whether a scrollbar is needed?

Instead of cloning the div, you could just have an overflow: hidden div and increase its scrollTop by one page height each time you need to advance a 'page'. (Even though the browser will show no scrollbar, you can still programmatically cause the div to scroll.)
This way, you let the browser handle what it's designed to do (flow of content).
Here's a snippet that will automatically advance through the pages: (demo)
var div = $('#pages'),
    h = div.height(),
    len = div[0].scrollHeight,
    p = $('#p');

setInterval(function() {
  var top = div[0].scrollTop += h;                        // advance one 'page'
  if (top >= len) top = div[0].scrollTop = 0;             // wrap around at the end
  p.text(Math.floor(top/h)+1 + '/' + Math.ceil(len/h));  // show 'page' number
}, 1000);
You could also do some fiddling to make sure that a 'page' does not start in the middle of a block-level element if you don't want (for example) headlines sliced in half. Unfortunately, it will be much harder (perhaps impossible) to ensure that a line of text isn't sliced in half.
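One partial mitigation, sketched here under the assumption that #pages has a uniform, pixel-valued line-height: before starting the paging loop, trim the div's height to a whole number of text lines, so each scroll step lands on a line boundary instead of slicing through one.

var div = $('#pages');
var lineHeight = parseFloat(div.css('line-height')); // NaN if computed as 'normal'; set an explicit line-height
var lines = Math.floor(div.height() / lineHeight);   // whole lines that fit in one 'page'
div.height(lines * lineHeight);                      // now each page height is an exact multiple of a line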


Does 'display:none' improve or worsen performance?

I have a page with a lot of vertical scrolling and thousands of DOM elements. To improve performance, I thought about setting display: none; on the content of the divs above and below the viewport, that is, the divs which are not visible (while keeping their heights, obviously).
In order to check if my idea makes any sense I searched SO and I found this question. According to the comments and to the accepted answer, the best strategy is to do nothing, since display: none; triggers reflow and may have the opposite effect:
Setting display to none triggers reflow, which is completely opposite of what you want if what you want is to avoid reflow. Not doing anything doesn't trigger reflow. Setting visibility to hidden will also not trigger reflow. However, not doing anything is much easier.
However, there is a recent answer (which unfortunately seems more like a comment or even a question) that claims that display: none; is the current strategy used by sites like Facebook, where the vertical scroll is almost infinite.
It's worth mentioning that, unlike the OP's description in that question, each visible div in my site is interactive: the user can click, drag, and do other stuff with the div's contents (which, I believe, makes the browser repaint the page).
Given all this information, my question is: does display: none; applied to the divs above/below the viewport improve performance, worsen it, or have no effect at all?
The "display: none" property of an Element removes that element from the document flow.
Redefining that element display property from none to any other dynamically, and vice versa, will again force the change in document flow.
Each time requiring a recalculation of all elements under the stream cascade for new rendering.
So yes, a "display: none" property applied to a nonzero dimensional and free flow or relatively positioned element, will be a costly operation and therefore will worsen the performance!
This will not be the case for say position: absolute or otherwise, removed elements form the natural and free document flow who's display property may be set to none and back without triggering e re-flow on the body of the document.
Now in your specific case [see edited graph] as you move/scroll down bringing the 'display: block' back to the following div will not cause a re-flow to the rest of the upper part of the document. So it is safe to make them displayable as you go. Therefore will not impact the page performance. Also display: none of tail elements as you move up, as this will free more display memory. And therefore may improve the performance.
Which is never the case when adding or removing elements from and within the upper part of the HTML stream!
The answer is, like just about everything, it depends. I think you're going to have to benchmark it yourself to understand the specific situation. Here's how to run a "smoothness" benchmark since the perception of speed is likely more important than actual system performance for you.
As others have stated, display:none leaves the DOM in memory. Typically the rendering is the expensive part, but that depends on how many elements have to be rendered when things change. If the repaint operation still has to check every element, you may not see a huge performance increase. Here are some other options to consider.
Use Virtual DOM
This is why frameworks like React and Vue use a Virtual DOM. The goal is to take over the browser's job of deciding what to update and to make only small, targeted changes.
Fully Add/Remove elements
You could replicate something similar by using an Intersection Observer to figure out what's in/out of the viewport, and actually add/remove elements from the DOM instead of relying on display:none alone, since running JavaScript is generally cheaper than large paints.
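A minimal sketch of that idea, assuming each row lives inside a height-reserving .placeholder element and a hypothetical render(id) helper builds its contents (neither name comes from the answer above):

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      // entering the viewport: build the real contents (render() is hypothetical)
      entry.target.appendChild(render(entry.target.dataset.id));
    } else {
      // leaving the viewport: drop the subtree but keep the sized placeholder
      entry.target.replaceChildren();
    }
  }
}, { rootMargin: '200px 0px' }); // start work a little before rows become visible

document.querySelectorAll('.placeholder').forEach((el) => observer.observe(el));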
Add GPU Acceleration
On the flip side, if the GPU is taking over rendering, the paint might not be a performance drain, but that's only on some devices. You can try it by adding transform: translate3d(0,0,0); to force GPU acceleration.
Give the Browser Hints
You may also see an improvement by utilizing the CSS will-change property. One of its intended inputs is content outside the viewport, so try will-change: scroll-position; on the elements.
CSS Content-Visibility (The bleeding edge)
The CSS Working Group at the W3C has the CSS containment module in draft form. The idea is to allow the developer to tell the browser what to paint and when. This includes paint and layout containment. content-visibility: auto is a super helpful property designed for just this type of problem. Here's more background.
Edit (April 2021): this is now available in Chrome 85+, Edge (Chromium) 85+, and Opera 71+. We're still waiting on Firefox support, but Can I Use puts it at 65% coverage.
It's worth a look as the demos I saw made a massive difference in performance and Lighthouse scores.
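A sketch of how it might look, assuming the scrolled rows share a hypothetical .row class roughly 300px tall (both assumptions, not from the answer):

/* Skip rendering work for off-screen rows; contain-intrinsic-size
   reserves an estimated height so the scrollbar doesn't jump. */
.row {
  content-visibility: auto;
  contain-intrinsic-size: 300px;
}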
Two tips for improving performance when you have thousands of DOM elements that need to be scrolled through, interacted with, etc.:
Try to manage the bindings provided by front-end frameworks manually. Frameworks may need a lot of additional processing for even the simple data binding you need, and they are only good up to a certain number of DOM elements. If your case is special and exceeds the number of DOM elements in an average case, the way to go is to bind manually with your circumstances in mind. This can certainly remove any lagging.
Buffer the DOM elements in and around the viewport. If your DOM elements are a representation of data in a table (or tables), fetch that data with a limit and render only what you fetched. The user's scrolling should drive the fetching and rendering upwards or downwards (see the sketch after the next paragraph).
Just hiding the elements is definitely not going to solve a performance problem caused by having thousands of DOM elements. Even though you can't see them, they still occupy the DOM tree and memory; the browser merely doesn't have to paint them.
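A rough sketch of the buffering idea, where fetchRows(offset, limit), renderWindow(), and the fixed rowHeight are all assumptions for illustration:

const rowHeight = 30;                                  // assumed fixed row height in px
const buffer = 20;                                     // extra rows above/below the viewport
const container = document.getElementById('table');   // hypothetical scroll container

container.addEventListener('scroll', async () => {
  const first = Math.max(0, Math.floor(container.scrollTop / rowHeight) - buffer);
  const count = Math.ceil(container.clientHeight / rowHeight) + 2 * buffer;
  const rows = await fetchRows(first, count);          // hypothetical limited fetch
  renderWindow(container, rows, first * rowHeight);    // hypothetical: draws rows at a pixel offset
});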
Here are some articles:
https://codeburst.io/taming-huge-collections-of-dom-nodes-bebafdba332
https://areknawo.com/dom-performance-case-study/
To add to the already posted answers.
The key takeaways from my testing:
setting an element to display: none; decreases RAM usage
elements that are not displayed are not affected by layout shifts, and therefore have no (or very little) performance cost in this regard
Firefox is way better at handling lots of elements (roughly 50x more than Chrome)
Also try to minimize layout changes.
Here is my test setup:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <style>
    body {
      margin: 0;
    }
    .testEl {
      width: 100%;
      height: 10px;
    }
  </style>
</head>
<body>
  <main id="main"></main>
  <script>
    // Firefox max
    // const elementCount = 12200000;
    // Chrome max
    const elementCount = 231000;
    const main = document.getElementById("main");
    const onlyShowFirst1000 = true;

    // build every row in one string; hide all but the first 1000 when the flag is set
    let _content = "";
    for (let i = 0; i < elementCount; i++) {
      _content += `<div class="testEl" style="background-color: hsl(${(Math.random() * 360)|0}, 100%, 50%); display: ${!onlyShowFirst1000 || i < 1000 ? "block" : "none"}"></div>`;
    }
    main.innerHTML = _content;

    // append a row at the end: no layout shift for the rows above it
    const addOneEnd = () => {
      const newEl = document.createElement("div");
      newEl.classList.add("testEl");
      newEl.style.backgroundColor = `hsl(${(Math.random() * 360)|0}, 100%, 50%)`;
      requestAnimationFrame(() => {
        main.appendChild(newEl);
      });
    };

    // insert a row at the top: shifts every row below it
    const addOneBeginning = () => {
      const newEl = document.createElement("div");
      newEl.classList.add("testEl");
      newEl.style.backgroundColor = `hsl(${(Math.random() * 360)|0}, 100%, 50%)`;
      requestAnimationFrame(() => {
        main.insertBefore(newEl, main.firstChild);
      });
    };

    // call loop() (or loop(false)) from the console to keep adding rows
    const loop = (front = true) => {
      front ? addOneBeginning() : addOneEnd();
      setTimeout(() => loop(front), 100);
    };
  </script>
</body>
</html>
I create a lot of elements and have the option to only display the first 1000 of them using the onlyShowFirst1000 flag. When displaying all elements, Firefox allowed up to ~12,200,000 elements (using 10 GB of my RAM) and Chrome up to ~231,000.
Memory usage (at 231,000 elements, onlyShowFirst1000 set to false vs. true):
+---------+----------+----------+-------------+
|         | false    | true     | reduction % |
+---------+----------+----------+-------------+
| Chrome  | 415,764k | 243,096k | 42%         |
+---------+----------+----------+-------------+
| Firefox | 169.9MB  | 105.7MB  | 38%         |
+---------+----------+----------+-------------+
Changing the display property of an element to or from none causes the area to be repainted, but the area of your element will usually be relatively small, therefore the performance cost will also be small.
But depending on your layout, the display change might also cause a layout shift which could be quite costly since it would cause a big part of your page to repaint.
In the future (e.g. Chrome 85) you will also be able to use the content-visibility property to tell the browser which elements don't have to be rendered.
Also, you can set the browser to show repaints using the dev tools; for Chrome, open the Rendering tab and check "Paint flashing".
The strategy of "virtual scrolling" is remove the HTML element when it's out of viewport, this improve the performance because reduce the number of the elements into the dom and reduce the time for repaint/reflow all the document.
Display none don't reduce the size of the dom, just make the element not visible, like visible hidden, without occupies the visual space.
Display none don't improve the performance because the goal of virtual scrolling is reduce the number of the elements into the dom.
Make an element display none or remove an element, trigger a reflow, but with display none you worst the performance because you don't have the benefit of reduce the dom.
About performance, display none is just like visible hidden.
Google Lighthouse flags pages as having poor performance when their DOM trees:
Have more than 1,500 nodes total
Have a depth greater than 32 nodes
Have a parent node with more than 60 child nodes
In general, look for ways to create DOM nodes only when needed, and destroy nodes when they're no longer needed.
The one benefit of display: none is that changes inside an already-hidden element will not cause a repaint or reflow.
Source:
https://web.dev/dom-size/
https://developers.google.com/speed/docs/insights/browser-reflow

How to parse visually coherent text in rendered HTML?

The assumption is that we have access to a rendered DOM via JavaScript (such as from the developer console once the page is loaded).
I want to extract text from a node in a way similar to how we humans interpret the content visually.
Example:
<div>
  <span>This</span>
  <span>Text</span>
  <div>
    <span>belongs together</span>
  </div>
</div>
My algorithm should be able to recognize this text as one cluster if it is rendered visually coherent.
So it should output "This Text belongs together" instead of ["This", "Text", "belongs together"].
Any ideas how to proceed?
I thought about computing the boundingRect for each text node and applying some clustering algorithm with the viewport dimensions as a reference point.
Your idea of using bounding rectangles and relating them is a good one.
This file from Chrome, spatial_navigation.cc, might interest you. "Spatial navigation" is a feature in some browsers where the focus doesn't move in tab order but in up-down-left-right space. It is analogous to your problem because it works over the DOM but cares with how the links appear, not the structure of the DOM.
If you examine the primitives spatial navigation is built from, they are:
Bounding rectangles.
Intersecting the viewport.
Whether a rectangle is to the right or below another one.
Whether something is obscured.
From those primitives higher level things are built up.
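Here is a minimal sketch of the asker's boundingRect idea built from those primitives. The TreeWalker traversal and the vertical-gap threshold are assumptions for illustration, not how Chrome does it:

function textRects(root) {
  // collect a viewport-relative rectangle for every non-empty text node
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const rects = [];
  for (let node; (node = walker.nextNode()); ) {
    if (!node.textContent.trim()) continue;        // skip whitespace-only nodes
    const range = document.createRange();
    range.selectNodeContents(node);
    const rect = range.getBoundingClientRect();
    if (rect.width && rect.height) rects.push({ text: node.textContent.trim(), rect });
  }
  return rects;
}

function clusterByProximity(items, maxGap = 10) {  // maxGap in px is an arbitrary assumption
  const clusters = [];
  for (const item of items) {                      // items arrive in document order
    const last = clusters[clusters.length - 1];
    if (last && item.rect.top - last.bottom <= maxGap) {
      last.texts.push(item.text);                  // vertically close: same cluster
      last.bottom = Math.max(last.bottom, item.rect.bottom);
    } else {
      clusters.push({ texts: [item.text], bottom: item.rect.bottom });
    }
  }
  return clusters.map(c => c.texts.join(' '));
}

// clusterByProximity(textRects(document.body))
// -> e.g. ["This Text belongs together", ...]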
Some more details on intersecting the viewport: the viewport is the area that's presenting content. You can use window.innerWidth and window.innerHeight for the viewport dimensions in pixels and compute whether something is visible by accumulating the layout and scroll offsets of it and its parents; or use Intersection Observers to find out whether an element is in the viewport.
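As a shortcut, getBoundingClientRect() already returns viewport-relative coordinates, so a one-off check can skip the offset accumulation entirely; a sketch:

function isInViewport(el) {
  // rect is relative to the viewport, so no offset accumulation is needed
  const rect = el.getBoundingClientRect();
  return rect.bottom > 0 && rect.right > 0 &&
         rect.top < window.innerHeight && rect.left < window.innerWidth;
}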
Some more details on obscured nodes: in general, detecting obscured nodes is hard. display: none; is an easy case: those nodes will have an offsetWidth and offsetHeight of 0. Overlapped content is harder: detect how content collides and determine the z-index of what is on top. Hardest are near-transparent content, low-contrast content, and heavily filtered or transformed content.
If you encounter a lot of tricky cases like this it might be simpler to capture the screen and perform OCR on it. This takes advantage of the browser's rendering pipeline to do all of the transforms and layering; you can find text in images; etc. The downside is the getDisplayMedia API doesn't work in all browsers yet and it interrupts the user with a prompt.
You can still look to OCR algorithms for inspiration. OCR has to solve a similar problem: once localized characters have been recognized, they have to be assembled into lines of text.
You can get your elements with getElementsByTagName or getElementsByClassName; these return an array-like collection of elements that you loop over. Then use the innerText property to get the text of each element.
var msg = "";
var els = document.getElementsByTagName("span");
for(i = 0; i < els.length; i++){
msg += els[i].innerText;
}
console.log(msg);

How can I compensate for longer load times when dynamically setting div dimensions with CSS and JS?

I am creating a Polymer app which displays information in rows in a custom element "table". I am using polymer flexbox classes to initially position elements. I need to take measurements with JS after the elements have loaded and assign widths to elements in the table accordingly. The problem goes something like this:
row 1
*--------------------*
| *-----------*|
| | el a ||
| *-----------*|
*--------------------*
row 2
*--------------------*
| *------*|
| | el b ||
| *------*|
*--------------------*
Row 1 and row 2 have fixed widths, but el a or el b could be of any width. Regardless of the contents of el a or b, I want the els contained in the rows to all be the same width as the widest. So in this case, a is widest, so el b should adjust its width to match a.
I am using Polymer's attached method to ensure the elements are loaded before taking scrollWidth measurements via JS. I initially ran into the problem of getting undefined elements when I tried to grab them with querySelector, but I fixed this using Polymer's async. I can style the elements just fine in a modern browser. Consider the following code.
attached: function() {
  this.async(function() {
    console.log(Polymer.dom(this.root).querySelector('#a').scrollWidth);
    console.log(Polymer.dom(this.root).querySelector('#b').scrollWidth);
  }, 100);
}
A modern version of Chrome will return something like 100px for a and 60px for b. If I test an older version of Chrome or a different browser like Firefox, a and b will be 5-15px less than what modern Chrome measured. I found out that if I increase the async time enough, no matter the age of the browser, I will eventually get a measurement matching what modern Chrome returned. This is to say that while the div appears to exist on the page and I can grab it with querySelector, it seems to still be fluctuating and is not yet at its full width.
I don't like guessing how long it will take for the elements to fully load before I can take measurements. I am trying to figure out a way I can be 100% confident that the elements are done loading or perhaps an alternate method of styling these elements I have overlooked. I have had the most success with using setTimeout on the function that is doing the measuring and waiting for it to measure the same thing several times in a row. But even that has been inconsistent at times and occasionally b will be slightly less wide than a when the web page appears.
One option when you need to keep checking things, even if they've been true a couple of times, is to continue to use setTimeout until the values stop changing for a certain number of iterations. For example, if they are still the same after 10 timeouts, stop the timeouts altogether.
something like...
var count = 0;
function check() {
  do_something();                    // re-measure here (placeholder from the answer)
  if (same_as_last_time) count++;    // stable once more
  else count = 0;                    // changed: start the streak over
  if (count < 10) setTimeout(check, 100);
}
setTimeout(check, 100);
Although this could be wildly inefficient and possibly ineffective depending on how staggered your data is appearing.
I believe the actual solution here lies in the fact that you are using the .scrollWidth property as your measurement point, which is a terrible measurement to standardize your widths across browsers. See this SO post for more info on that (https://stackoverflow.com/a/33672058/3479741).
I would recommend instead using a different method of acquiring the width and compare to see whether your results are more consistent. There are many options that remain more consistent than ".scrollWidth" in the article above.
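For instance, one sketch of the same measurement using getBoundingClientRect(), which returns fractional, layout-accurate widths. The element ids are the ones from the question; the width-matching step is an assumption about the goal:

attached: function() {
  this.async(function() {
    var a = Polymer.dom(this.root).querySelector('#a');
    var b = Polymer.dom(this.root).querySelector('#b');
    // getBoundingClientRect() gives sub-pixel widths, unlike scrollWidth
    var widest = Math.max(a.getBoundingClientRect().width,
                          b.getBoundingClientRect().width);
    a.style.width = b.style.width = widest + 'px';
  }, 100);
}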

HTML Element Divided in Half by Horizontal Line (at its waist)

Long-time lurker here. This is my first post (and I'm an electrical engineer, not a programmer).
I would like to have an element in HTML for which I can detect a click on its upper-half and its lower-half. Specifically, suppose I have a large numeric digit, and if you click above its "waist" it increments, and if you click below its waist it decrements. I then need to put several of these next to one another, like a "split-flap display" at a train station.
I already have all the JavaScript working for increment-only, but I want to make it easier to decrement instead of having to wrap all the way around with many clicks. I have so far avoided using jQuery, so if you can think of a plain HTML/JavaScript way to do this I would love to hear about it.
I realize I will probably have to wrap two smaller containers (an upper and a lower one) into a larger container, but how do I get the text to cover the height of both internal containers? I probably want to avoid cutting a font in half and updating upper and lower separately.
Thanks,
Paul
This should work:
element.addEventListener('click', function(e) {
  // here's the bounding rect
  var bound = element.getBoundingClientRect();
  var height = bound.height;
  var mid = bound.bottom - (height / 2);
  // if we're above the middle increment, below decrement
  if (e.clientY < mid) {
    // we're above the middle, so put some code here that increments
  } else {
    // we're below the middle, so put some code here that decrements
  }
}, false);
element is the element that you wish to apply this effect to.
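To make it concrete, here is a small assumed wiring for one digit; the #digit element and the 0-9 wraparound are illustrative guesses, not from the question:

var element = document.getElementById('digit'); // hypothetical digit container
var value = 0;
element.addEventListener('click', function(e) {
  var bound = element.getBoundingClientRect();
  var mid = bound.bottom - (bound.height / 2);
  value = (e.clientY < mid) ? (value + 1) % 10   // top half: increment
                            : (value + 9) % 10;  // bottom half: decrement (wraps 0 -> 9)
  element.textContent = value;
}, false);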

Write a completely fluid HTML page (using '%' instead of 'px' for EVERY element height/width)

I am designing my HTML pages to be completely fluid:
For every element in the mark-up (HTML), I am using style="height:*%;width:*%" (instead of style="height:*px;width:*px").
This approach seems to work pretty well, except for when changing the window measurements, in which case, the web page elements change their position and end up "on top of each other".
I have come up with a pretty good run-time (JavaScript) solution to that:
var elements = document.getElementsByTagName("*");
for (var i = 0; i < elements.length; i++) {
  // freeze each fluid element at its current pixel size
  if (elements[i].style.height) elements[i].style.height = elements[i].offsetHeight + "px";
  if (elements[i].style.width)  elements[i].style.width  = elements[i].offsetWidth  + "px";
}
The only problem remaining is, that if the user opens up the website by entering the URL into a non-maximized window, then the page fits that portion of the window.
Then, when maximizing the window, the page remains in its previous measurements.
So in essence, I have solved the initial problem (when changing the window measurements), but only when the window is initially in its maximum size.
Any ideas on how to tackle this problem? (given that I would like to keep my "% page-design" as is).
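One literal fix, sketched under the assumption that the authored percentage styles are saved before the first freeze (the two-pass restore-then-measure structure is an addition, not in the original snippet): re-run the freeze on every window resize.

var elements = document.getElementsByTagName("*");
var originals = [];
for (var i = 0; i < elements.length; i++) {
  // remember the authored % values before anything is frozen to px
  originals.push({ h: elements[i].style.height, w: elements[i].style.width });
}
function freeze() {
  // pass 1: restore the % styles so the browser reflows fluidly
  for (var i = 0; i < elements.length; i++) {
    elements[i].style.height = originals[i].h;
    elements[i].style.width = originals[i].w;
  }
  // pass 2: pin the reflowed sizes in pixels
  for (var i = 0; i < elements.length; i++) {
    if (elements[i].style.height) elements[i].style.height = elements[i].offsetHeight + "px";
    if (elements[i].style.width) elements[i].style.width = elements[i].offsetWidth + "px";
  }
}
window.addEventListener("resize", freeze);
freeze();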
I think the real answer is that "completely fluid design" isn't synonymous with "just use percentage measurements for everything". You will need to consider:
Exactly what each specific element should do when the window changes size
Some elements may need to appear/disappear when the screen is resized
Elements should likely have min- and max-widths specified
What happens on a small (e.g. 480x800 mobile) display?
What happens on a large (e.g. 2560x1600 monitor) display?
...amongst other things. There is no generic solution that you can just apply to every element to make fluid design work.
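For example, a sketch of the min/max-width point with made-up class names and breakpoints:

.sidebar {
  width: 25%;          /* fluid by default */
  min-width: 150px;    /* stop collapsing below a usable width */
  max-width: 400px;    /* stop growing on very wide monitors */
}
@media (max-width: 480px) {
  .sidebar { display: none; } /* one way an element might disappear on small displays */
}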
