Since trying out web workers for the first time, I've struggled to find a real use case for them. Communication with them isn't as easy as passing objects or references, they don't have a window object so I can't use jQuery, and the increased complexity of building an interface isn't worth the load saved on the main thread. So all the options I'm left with basically amount to working through large arrays to gain a performance advantage.
Today I thought about opening a new window with window.open(), using the newly created window object to do some task, and passing the result back to the main window. I should even be able to access the DOM of the main window through the window.opener variable in the new window.
My questions are:
Is that really going to give me a performance advantage?
Are there any caveats about this idea besides more complicated debugging?
Can I access the DOM of the main window from the new window using the window.opener variable and take the load of creating new DOM elements from the main thread?
Is that really going to give me a performance advantage?
No. As long as you can access window.opener, or access objects from one tab in another, the two windows share the same instance of the JavaScript interpreter, and the JavaScript interpreter is single-threaded.
There is literally no (practical) way around this. You either have a separate thread, or you share the same objects.
Are there any caveats about this idea besides more complicated debugging?
Main caveat: it does not work. Another caveat: a separate window is probably not something suitable for production.
Can I access the DOM of the main window from the new window using the window.opener variable and take the load of creating new DOM elements from the main thread?
You can, but you should probably use the correct document instance when calling document.createElement, i.e. the document the element will actually be inserted into.
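For illustration, here's a minimal sketch of what that looks like from inside the child window. This assumes the popup was opened from the same origin; otherwise access to window.opener's DOM is blocked:

// Running inside the window created by window.open().
var parentDoc = window.opener.document;

// Create the element with the PARENT's document, not our own,
// so the node belongs to the document it will be inserted into.
var div = parentDoc.createElement('div');
div.textContent = 'Created from the child window';
parentDoc.body.appendChild(div);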
So all the options I'm left with are basically working through large arrays to gain a performance advantage.
That's exactly what workers are for. Unless you are processing large amounts of raw data, they are probably not the solution to your problem.
If you have performance drops when creating DOM nodes, you're most likely doing something wrong. Remember that:
createDocumentFragment exists for building self-contained element groups before appending them (see the sketch after this list).
innerHTML forces the browser to re-parse and rebuild the subtree whose HTML you changed.
new Text() or document.createTextNode() can be used to fill in text without innerHTML.
If your page is a scrollable table of many items, only those on screen need to be rendered.
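As a minimal sketch of the fragment approach, assuming a hypothetical #list container you want to fill with many items:

var fragment = document.createDocumentFragment();

for (var i = 0; i < 1000; i++) {
  var li = document.createElement('li');
  li.appendChild(document.createTextNode('Item ' + i)); // no innerHTML needed
  fragment.appendChild(li); // off-DOM, so no layout work happens yet
}

// One insertion and one reflow instead of 1000.
document.getElementById('list').appendChild(fragment);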
You should also profile your code using the developer tools to see where the performance bottleneck actually is. A web worker is for data processing (e.g. resizing images before upload), not for DOM manipulation.
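For completeness, here's a minimal sketch of the messaging a worker involves; the file name worker.js and the squaring task are just placeholders for your own processing:

// main.js
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  console.log('Result from worker:', e.data);
};
worker.postMessage([1, 2, 3, 4]); // data is structured-cloned, not shared

// worker.js
self.onmessage = function (e) {
  var result = e.data.map(function (n) { return n * n; }); // the heavy work
  self.postMessage(result);
};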
Final note: it's the end of 2019, and "I can't use jQuery" shouldn't be a problem any more. We now have document.querySelector and CSS animations, which cover what were the main uses of jQuery in the past.
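For instance, a typical jQuery selection plus class toggle translates directly to plain DOM calls:

// jQuery: $('.menu .item').addClass('active');
document.querySelectorAll('.menu .item').forEach(function (el) {
  el.classList.add('active');
});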
Related
I'm trying to figure out if altering the DOM of a website will present any accessibility problems. I am placing all of the jQuery in a $(document).ready() function. Will this cause any accessibility issues with the altered elements?
We don't have access to the theme template HTML files, only the CSS and JS files. So, for example, I'm adding a div into our theme using $('[element name]').before('<div>[div content here]</div>'). Will this content be as accessible as the rest of the DOM as long as I include all the appropriate ARIA attributes, etc.?
In theory, you shouldn't rely on JavaScript to produce the whole HTML code of your site; it's generally considered bad practice.
However, it's exactly how big frameworks like Angular and React work.
Given that 99% of browsers support JavaScript, it's in fact no longer a problem nowadays.
The true answer is in fact both yes and no: it depends on the actual HTML code injected.
The key point is that, you must have the same care with generated code as with the code directly written in HTML by hand, i.e. be careful on headings, form labels, alt texts, ARIA attributes if you need them, etc. all the time and in particular each time you add/remove something in the DOM. Additionally, you must pay attention to where the focus is or might be and a few other things.
It's often overlooked precisely because some people assume that it isn't accessible anyway, which isn't true.
In order to be accessible, a site with dynamic content must be accessible at every moment. If that isn't always the case, then you will lose users who need accessibility at some point. In practice the loss of accessibility often happens at the most critical moment, checkout or payment, and perhaps through no fault of your own if the payment site isn't accessible.
You might even improve accessibility by manipulating the DOM via JavaScript (JS). So no, per se, manipulating the DOM does not pose accessibility issues.
If you cannot control the HTML and the theme is badly accessible, all you can do to improve it is use JavaScript. Think of adding role attributes to generic <div> elements. Also, CSS-only solutions seem appealing, but they often fail to expose the appropriate state to assistive technology via ARIA attributes, which needs to be corrected via JS.
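A small sketch of such a retrofit; the div.site-nav selector is hypothetical:

// Give a generic theme <div> the semantic role it is missing.
var nav = document.querySelector('div.site-nav'); // hypothetical selector
if (nav) {
  nav.setAttribute('role', 'navigation');
  nav.setAttribute('aria-label', 'Main');
}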
Whether your manipulations produce problems or improve accessibility, therefore depends strongly on your implementation.
Here are some examples.
Adding or Removing Content
When adding content, like in your example, it depends on where that content is added in the document, and at which moment.
If it's being added on DOM ready, there should be no issue (one exception might be live regions added after the DOM loaded). But if it's being added at arbitrary moments, it's problematic.
The Web Content Accessibility Guidelines (WCAG) refer to this as a Change of Context, which must not happen on focus or on input, and should otherwise only happen on user request.
See the WCAG note: "A change of content is not always a change of context. Changes in content, such as an expanding outline, dynamic menu, or a tab control do not necessarily change the context, unless they also change one of the above (e.g., focus)."
If it's being added after DOM ready, it should happen on user request, or it must be a status message and carry the appropriate role to be announced.
For example, in a Disclosure pattern, the aria-expanded attribute on the trigger indicates that new content will become accessible right after the trigger when it is pressed. That content might just be added to the DOM at that point, depending on the implementation.
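A minimal sketch of such a disclosure, with hypothetical ids:

<button id="details-toggle" aria-expanded="false" aria-controls="details">
Show details
</button>
<div id="details" hidden>Details content</div>

<script>
var toggle = document.getElementById('details-toggle');
var panel = document.getElementById('details');
toggle.addEventListener('click', function () {
  var expanded = toggle.getAttribute('aria-expanded') === 'true';
  toggle.setAttribute('aria-expanded', String(!expanded)); // flip the state
  panel.hidden = expanded; // hide if it was expanded, show otherwise
});
</script>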
Lazy Loading Content
Another very valid use case is content that's added asynchronously. This is tricky to get right, but basically aria-busy can make it more accessible.
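Roughly, the container is marked busy while loading and un-marked once the content is in place; the #feed id and /posts endpoint are made up:

var feed = document.getElementById('feed'); // hypothetical container
feed.setAttribute('aria-busy', 'true'); // tell AT the region is updating

fetch('/posts') // hypothetical endpoint returning an HTML fragment
  .then(function (res) { return res.text(); })
  .then(function (html) {
    feed.innerHTML = html;
    feed.setAttribute('aria-busy', 'false'); // safe to announce again
  });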
Goal
I'm making a Chrome extension to perform some manipulations on my university's website, since the layout for selecting a course is bad. For this I need to access elements to read their inner information, and also to copy their CSS so I can add certain information, obtained from a different site, in a way that fits the style of the page.
Problem
When I view the source of the exact page I want to use, it doesn't display the correct HTML. Instead it shows the main page's code in the dev tools. The interesting part is that when I inspect a certain element, its code shows up and I'm able to make changes within the tool. But if I try to query a specific element from the console using $(id) or $$(id), I get either null or [].
This causes some problems, because I'm new to any sort of web-related development and I would like to see the complete source so that I can select the elements I want and manipulate the page the way I would like. Maybe there is something I'm overlooking? That's why I need your help.
Possible reasons
I tried many things and did some research, and concluded that it might have to do with frames, since the URL is not changing. However, I'm not able to find any resources to teach me about frames (I know nothing about them), if that's the actual problem.
If the problem is something else, I would appreciate any assistance in solving it, or any workaround that I am not aware of.
The reason is definitely the use of frames. There are multiple documents at play here: the top-level document, plus each frame's own document. This is important because the JavaScript you are executing runs, 99.9999% of the time, against the top-level document and not a child frame's document. Because of this it doesn't find the DOM nodes: it never searches the frames' documents.
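Assuming the frame is same-origin with the top-level page (otherwise access is blocked), reaching into its document looks roughly like this:

// Grab the frame element from the top-level document first.
var frame = document.querySelector('iframe'); // or window.frames[0]

// Then query against the FRAME's document, not the top-level one.
var innerDoc = frame.contentDocument || frame.contentWindow.document;
console.log(innerDoc.getElementById('elementId')); // found now

In a Chrome extension content script, the equivalent fix is setting "all_frames": true in the manifest, so the script also runs inside each frame's document.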
I'm writing a JavaScript script that periodically checks for new elements within a page, that is, DOM tree updates. One of those specific elements contains a hyperlink to another page. My objective is to perform a GET of that page and convert the result to a DOM object in order to trigger a particular event on a particular element within that page. I could do this with var newPage = window.open(hyperlink); and then access the elements within the page through newPage.document.getElementById('elementId');. However, the script iterates over many hyperlinks, and it is not efficient to open them all up.
So, is there any way to manipulate an object of an entire page efficiently, i.e., without opening it (e.g., $.get(hyperlink, function(page) { // convert page to DOM });)?
Appreciate any answers,
Thanks.
Perhaps you're taking the wrong approach. Rather than converting the page to DOM, you could simply do a regex search for the link. That would clearly be the most efficient way to make use of a page's contents. However, admittedly, it is also a pain to do properly, and it doesn't take into consideration links added by JavaScript.
It depends entirely on what your scope is. If you tell me you're looking for an efficient way of accomplishing this, then I offer this solution. Otherwise, there's no "quick" way of parsing an entire page into DOM no matter which way you slice it.
This should get you started on a regular expression for extracting HTML links.
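A rough sketch of that idea; regexes over HTML are fragile, so treat this only as a starting point:

$.get(hyperlink, function (page) {
  // Naive: grabs the href values of anchor tags in the raw HTML string.
  var re = /<a\b[^>]*\bhref\s*=\s*["']([^"']+)["']/gi;
  var match, links = [];
  while ((match = re.exec(page)) !== null) {
    links.push(match[1]);
  }
  console.log(links); // misses links injected later by JavaScript
});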
I am creating a webpage which needs to pull in a Javascript library which is indispensable but also breaks other code by modifying the prototypes of built-in classes. I have sandboxed it in an iframe and set up cooperative data sharing between the two.
When the iframe's body.onload fires, my code will modify the parent document. In theory this code will be correct regardless of whether the parent or child's body.onload fires first. In practice, I can't test one code path because the iframe always seems to load before the parent, even if I inject artificial delays in the web server.
Is the parent body guaranteed to always fire onload only after child documents have loaded, or is this just a quirk of Firefox?
If not, how can I, for the sake of testing, force the child to load later when sleep(5) in child.php doesn't give me this test case?
Finally, when the child Javascript modifies its Javascript environment, is this guaranteed to be separate from the parent frame, or is this just a quirk of Firefox that makes it work?
Portability matters. Thanks.
If portability matters to you, then convenience should matter too. Your whole concept sounds clunky and unreasonable as described in your post (dealing with iframes, sleep, etc.).
However, to answer your questions: I'm not 100% sure about document.body.onload, but you can be sure that any handler bound to window.onload will fire only after all iframes have loaded; that is very, very cross-browser compatible. The fact that an iframe is a "sandboxed" environment with its own DOM is also very cross-browser, so you don't need to worry about the parent document or other iframes and conflicts.
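In other words, something like this on the parent is safe across browsers:

// Parent document: 'load' fires only after the document AND all of its
// subresources, including every child iframe, have finished loading.
window.addEventListener('load', function () {
  var child = window.frames[0]; // the sandboxed iframe's window
  console.log(child.document.readyState); // "complete" (same-origin assumed)
});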
I am creating a progressively built single-page (if Javascript is enabled) "blog" which uses AJAX to request HTML for new pages the user is navigating to.
When the user navigates to new pages they will be added one after another into the DOM in a small window with "overflow: hidden;":
<div id="foo" style="width:200px; height:100px;">
<div id="bar" style="width:999999px">
</div>
</div>
When an AJAX call returns successfully, a div will be appended into #bar.
How will it affect the browser when there are a lot of hidden pages outside the #foo width?
Do I need to remove the divs from the DOM as the user navigates away from them? Then I will need to make a new AJAX request if the user chooses to navigate to them again. :(
Thanks
Willem
No matter what people say garbage collection will do for you, whether in JavaScript or C# or Java, watch out and don't bank on the promise of fully automatic memory management. Clean up explicitly and sleep well.
Very simple reason: closures leak, and leak pretty badly, the moment you move beyond the most simplistic scenarios (this is the case both for the browser's JavaScript and for C#/Java).
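Concretely, when the user navigates away from a page, something along these lines keeps the DOM small; the names here are made up:

var cache = {}; // url -> HTML string, so revisits can skip the AJAX call

function unloadPage(url, pageDiv, onPageClick) {
  cache[url] = pageDiv.innerHTML; // keep the content, drop the nodes
  // Detach listeners we attached, so closures over pageDiv can be collected.
  pageDiv.removeEventListener('click', onPageClick);
  // Remove the subtree from the DOM entirely.
  pageDiv.parentNode.removeChild(pageDiv);
}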
Modern browser layout engines are generally smart enough not to process elements that are hidden, so it won't take much CPU power. However, adding large numbers of nodes with highly complex object graphs can be expensive in some browsers, so I'd be careful with this. Also note that even if they're not laid out, they're still part of the DOM, and memory usage could conceivably become a concern if these nodes are large.