I'm implementing a front-end application using Stencil.js. We intend to nest our Stencil application on every page of our web application, which all use different technologies. We have some pages that are Angular apps, others are made with React, etc.
Within the Stencil application, I want to modify the "body" element on the background web pages so that I can disable the scroll bar at various times. I can easily make a call to "document.body", but I don't know if this is something that will always be available depending on the type of web page. Does every web page contain a DOM, as well as a "body" element, regardless of what technology was used?
In case I need to clarify this, I'm talking about visual web pages loaded in web browsers.
Generally speaking, yes, but there are exceptions.
document.body is a getter that looks up the body element every time you refer to it in code. If your code executes before the body element has been created, you will get null. The common cases where document.body might not be ready yet are a synchronous script tag in the head element, or a browser extension set up to execute before the page has loaded.
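A defensive way to handle this from your Stencil code, as a minimal sketch (the disableScroll helper and the overflow approach are illustrative assumptions, not part of any Stencil API):

// Runs safely whether or not <body> has been parsed yet.
function disableScroll() {
  document.body.style.overflow = 'hidden';
}

if (document.body) {
  // Body already exists (script ran late enough in the page life-cycle).
  disableScroll();
} else {
  // Script ran too early (e.g. a synchronous script in <head>);
  // wait until parsing is done and body is guaranteed to exist.
  document.addEventListener('DOMContentLoaded', disableScroll);
}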
I have a website that lazy-loads React scripts from different sources. For each script loaded, we provide a div with the name of the script as its id. As soon as the script is loaded, it searches for the div with that id and renders its components.
As the site is displayed on a stationary tablet, it does not reload very often and the memory footprint gets pretty big. Is there a way to completely unload a React script without reloading the website? Is there even a way to unload any kind of script? I guess the garbage collector is responsible for this, but currently it's not even removing scripts/components that unmounted a long time ago.
While searching for a solution, I found this thread about Angular. I'm basically looking for a way to do the same with React (even though I didn't test the Angular solution).
Before removing the script tag and the container DOM node, you can use unmountComponentAtNode to allow React to do its cleanup.
ReactDOM.unmountComponentAtNode(document.getElementById('root'));
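Putting that together, a minimal teardown sketch; unloadWidget and the data-widget attribute are assumptions about how your markup is organized, not a standard API:

function unloadWidget(name) {
  var container = document.getElementById(name);
  if (container) {
    // Let React run componentWillUnmount and detach its listeners first.
    ReactDOM.unmountComponentAtNode(container);
    container.parentNode.removeChild(container);
  }
  // Remove the script tag itself (assumes you tagged it when injecting it).
  var script = document.querySelector('script[data-widget="' + name + '"]');
  if (script) {
    script.parentNode.removeChild(script);
  }
  // Note: globals, timers or listeners the script created outside React
  // will still keep memory alive until they are released too.
}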
Alternatively, use a design pattern based on conditional rendering: check in componentDidMount whether data has been returned or whether the specific section should be rendered at all, and render nothing otherwise.
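A minimal sketch of that idea (fetchData and Widget are hypothetical placeholders for your own data source and component):

class Section extends React.Component {
  constructor(props) {
    super(props);
    this.state = { data: null };
  }
  componentDidMount() {
    // fetchData is a stand-in for however you load your data.
    fetchData().then(function (data) {
      this.setState({ data: data });
    }.bind(this));
  }
  render() {
    // Render nothing until the section is actually needed; when the data
    // goes away again, React unmounts the subtree and frees it.
    if (!this.state.data) {
      return null;
    }
    return <Widget data={this.state.data} />;
  }
}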
I am trying to understand how the DOM is rendered and how resources are requested/loaded from the network. However, when reading resources found on the internet, the terms DOM parsing/loading/rendering/ready are used interchangeably and I can't seem to grasp the order of these 'events'.
When a script, CSS or image file is requested from the network, does it stop only the rendering of the DOM, or does it stop the parsing as well? Is DOM loading the same as DOM rendering? And is the DOMContentLoaded event equivalent to jQuery.ready()?
Can someone please explain whether some of these terms are synonymous and in what order they happen?
When you open a browser window, that window needs to have a document loaded into it for the user to see and interact with. But, a user can navigate away from that document (while still keeping the same window open) and load up another document. Of course, the user can close the browser window as well. As such, you can say that the window and the document have a life-cycle.
The window and the document are accessible to you via object APIs and you can get involved in the life-cycles of these objects by hooking up functions that should be called during key events in the life-cycle of these objects.
The window object is at the top of the browser's object model - it is always present (you can't have a document if there's no window to load it into) and this means that it is the browser's Global Object. You can talk to it anytime in any JavaScript code.
When you make a request for a document (that would be an HTTP or HTTPS request) and the resource is returned to the client, it comes back in an HTTP or HTTPS response - this is where the data payload (the text, html, css, JavaScript, JSON, XML, etc.) lives.
Let's say that you've requested an .html page. As the browser begins to receive that payload it begins to read the HTML and construct an "in-memory" representation of the document object formed from the code. This representation is called The Document Object Model or the DOM.
The act of reading/processing the HTML is called "parsing" and when the browser is done doing this, the DOM structure is now complete. This key moment in the life-cycle triggers the document object's DOMContentLoaded event, which signifies that there is enough information for a fully formed document to be interactive. This event is synonymous with jQuery's document.ready event.
But, before going on, we need to back up a moment... As the browser is parsing the HTML, it also "renders" that content to the screen, meaning that space in the document is allocated for each element and its content is displayed. This doesn't happen AFTER all parsing is complete; the rendering engine works at the same time the parsing engine is working, just one step behind it -- if the parsing engine parses a table-row, for example, the rendering engine will then render it. However, when it comes to things like images, although the image element may have been parsed, the actual image file may not yet have finished downloading to the client. This is why you may sometimes initially see a page with no images, and then as the images begin to appear, the rest of the content on the page has to shift to make room for them -- the browser knew there was going to be an image, but it didn't necessarily know how much space that image would need until it arrived.
CSS files, JS files, images and other resources required by the document download in the background, but most browsers/operating systems cap how many HTTP requests can be in flight simultaneously. On Windows, for example, the registry has a setting for IE that caps this at 10 requests at a time, so if a page has 11 images, the first 10 will download at the same time, but the 11th will have to wait. This is one of the reasons it is suggested to combine multiple CSS files into one file and to use image sprites rather than separate images -- to reduce the overall number of HTTP requests a page has to make.
When all of the external resources required by the document have finished downloading (CSS files, JavaScript files, image files, etc.), the window receives its "load" event, which signifies that not only has the DOM structure been built, but all resources are available for use. This is the event to tap into when your code needs to interact with the content of an external resource -- it must wait for the content to arrive before consuming it.
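A quick sketch showing the order in which these events fire:

// 1. Fires when the HTML has been fully parsed and the DOM tree is
//    complete; images and stylesheets may still be downloading.
document.addEventListener('DOMContentLoaded', function () {
  console.log('DOMContentLoaded');
});

// 2. Fires when all external resources have also finished downloading.
window.addEventListener('load', function () {
  console.log('load');
});

// jQuery's ready is the equivalent of DOMContentLoaded, so it fires
// with (1), not with (2).
$(document).ready(function () {
  console.log('jQuery ready');
});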
Now that the document is fully loaded in the window, anything can happen. The user may click things, press keys to provide input, scroll, etc. All these actions cause events to trigger and any or all of them can be tapped into to launch custom code at just the right time.
When the browser window is asked to load a different document, there are events that are triggered that signify the end of the document's life, such as the window's beforeunload event and ultimately its unload event.
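A small sketch of those end-of-life hooks:

window.addEventListener('beforeunload', function (e) {
  // The document is still fully alive here; setting returnValue asks
  // the browser to show a "leave this page?" prompt.
  e.returnValue = 'Are you sure you want to leave?';
});

window.addEventListener('unload', function () {
  // Last chance to run code while the document is torn down; only
  // quick, synchronous work is possible here.
});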
This is all still a simplification of the total process, but I think it should give you a good overview of how documents are loaded, parsed and rendered within their life-cycle.
I'm making a game using JavaScript. Currently I'm using window.location = "somepage.html" to perform navigation, but I'm not sure if that is the correct way to do it. As I said in the title, I chose the Blank App template, so I do not have a navigator.js or anything like it.
Can you guys tell me the best way to do it?
Although you can use window.location to perform navigation, I'm sure you've already noticed a few of the downsides:
The transition between pages goes through a black screen, which is an artifact of how the underlying HTML rendering engine works.
You lose your script context between pages, e.g. you don't have any shared variables or namespaces, unless you use HTML5 session storage (or WinRT app data).
It's hard to wire up back buttons, e.g. you have to make sure each destination page knows what page navigated to it, and then maintain a back stack in session storage.
It's for these reasons that WinJS + navigator.js created a way to do "pages" via DOM replacement, which is the same strategy used by "single page web apps." That is, you have a div in default.html within which you load and unload DOM fragments to give the appearance of page navigation, while you never actually leave the original script context of default.html. As a result, all of your in-memory variables persist across all page navigations.
The mechanics work like this: WinJS.Navigation provides an API to manage navigation and a backstack. By itself, however, all it really does is manage a backstack array and fire navigation-related events. To do the DOM replacement, something has to be listening to those events.
Those listeners are implemented in navigator.js, so that's a piece of code that you can pull into any project for this purpose. Navigator.js implements them inside a custom control called the PageControlNavigator (usually Application.PageControlNavigator).
That leaves the mechanics of how you define your "pages." This is what the WinJS.UI.Pages API is for, and navigator.js assumes that you've defined your pages in this way. (Technically speaking, you can define your own page mechanisms for this, perhaps using the low-level WinJS.UI.Fragments API or even implementing your own from scratch. But WinJS.UI.Pages came about because everyone who approached this problem basically came up with the same solution, so the WinJS team provided one implementation that everyone can use.)
Put together then:
You define each page as an instance of WinJS.UI.Pages.PageControl, where each page is identified by its HTML file (which can load its own JS and CSS files). The JS file contains implementations of a page's methods like ready, in which you can do initialization work. You can then build out any other object structure you want. (See the sketch after these steps.)
In default.html, define a single div for the "host container" for the page rendering. This is an instance of the PageControlNavigator class that's defined in navigator.js. In its data-win-options you specify "{home: }" for the initial page that's loaded.
Whenever you want to switch to another page, call WinJS.Navigation.navigate with the identifier for the target page (namely the path to its .html file). In response, it will fire some navigating events.
In response, the PageControlNavigator's handlers for those events will load the target page's HTML into the DOM, within its div in default.html. It will then unload the previous page's DOM. When all of this gets rendered, you see a page transition--and a smooth one because we can animate the content in and out rather than go through a black screen.
In this process, the previous page control's unload method is called, and the init/load/processed/ready methods of the new page control are called.
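A minimal sketch of those pieces (the page path is illustrative):

// pages/game/game.js -- the page control for pages/game/game.html
WinJS.UI.Pages.define("/pages/game/game.html", {
  ready: function (element, options) {
    // Initialization work for this page. In-memory state elsewhere in
    // the app survives navigation because default.html never unloads.
  },
  unload: function () {
    // Cleanup when navigating away from this page.
  }
});

// Anywhere in the app: navigate to a page by the path of its .html file.
WinJS.Navigation.navigate("/pages/game/game.html");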
It's not too hard to convert a blank app template into a nav template project--move your default.html/.css/.js content into a page control structure, add navigator.js to default.html (and your project), and put a PageControlNavigator into default.html. I suggest that you create a project from the nav app template for reference. Given the explanation above, you should be able to understand the structure.
For more details, refer to Chapter 3 of my free ebook, Programming Windows Store Apps with HTML, CSS, and JavaScript, Second Edition, where I talk about app anatomy and page navigation with plenty of code examples.
The way DerbyJS (http://derbyjs.com) seems to work at the moment is that it replaces everything in the body tag of the document whenever you click a link.
Is there any way to tell it to use the template, but replace the content inside #main-content with it instead of the whole body?
Navigation on the left is fixed and doesn't need the benefits of realtime interaction.
Why this is an issue: I need to run some JavaScript on page load to set the size of some containers based on the size of the user's browser window. Once I click a link, this setup gets wiped and recreated, and the JavaScript doesn't run again, because the document itself hasn't refreshed, just the body.
This would also allow me to write nicer jQuery bindings for the most part, $('element').click(...) rather than $('html').on('click', 'element', ...).
Any thoughts, or is this a step too far for this framework at this point in time?
P.S. As I'm only just getting started with Derby, and realtime frameworks in general, maybe what I'm trying to do isn't best practice anyway? I chose Derby because I like the UX part of initial render on the server, then the rest in the client, but sharing routers, which reduces the duplication of code. Open to any better ways of achieving this.
There is no way to re-render only part of the body on page render; just the whole body.
You can use the app.enter hook to run JS code after every page render.
No need to use jQuery bindings; use Derby bindings.
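A minimal sketch of the app.enter hook (the '*' pattern and the #main-content selector are illustrative):

// Runs in the browser after every render of a matching route, so the
// sizing code survives client-side navigation.
app.enter('*', function (model) {
  var height = window.innerHeight;
  document.getElementById('main-content').style.height = height + 'px';
});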
I fully agree with Vladimir's answer, just trying to add something to it.
It should be possible to re-render part of the UI through transitional routes (http://derbyjs.com/#routes). In your case it seems like app.enter is the way to go though.
I'm writing a Mozilla Firefox add-on that lets me comment on websites: when I open a website and click somewhere, the plugin creates a <div> box at that location, where I can enter a comment text. Later, when I open the website again, the plugin automatically puts my previously created comment boxes at the places they were before. (Similar to the comment feature in many PDF readers, etc.)
This leads to a security problem: A website could use an event listener to listen to the creation of new <div> elements and read their content, allowing it to read my private comments.
How can I solve this security issue? Basically, I want a Firefox addon to put private content in a website, while the website should not be able to access this content via JavaScript. (Unless I want it to.)
I could listen for listeners and detach them as soon as the website attaches them, but that doesn't sound like a solid solution.
Is there a security concept that would make my addon the authority over DOM changes, or rather, let it control access to certain elements?
Alternatively, would it be possible to implement some sort of overlay which would not be an actual part of the website's DOM but only accessible by the addon?
Similar security problems should occur with other addons. How do they solve it?
If you inject DOM into a document, the document will always be able to manipulate it; you can't really do much about that. You have a few options:
1) Don't inject your comment directly into the document, but just a placeholder containing the first words of the comment, or an image version of the comment (you can generate that with canvas). Keep the full comments in your JavaScript add-on scope, which is not accessible from the page: when you click to edit or add, you can open a panel instead and do the editing there.
2) Inject an iframe. If you host your page remotely on another domain, this shouldn't be a problem at all: the parent document can't access the iframe, but also vice versa, so you need to attach a content script to your iframe in order to talk with your add-on code, and then use your add-on code to send and receive messages between the iframe and the parent document.
If you use a local resource:// document, I'm afraid you need a terrible workaround instead, and you need to use sandbox policies to prevent the parent document from communicating with the iframe itself. See my reply here: Firefox Addon SDK: Loading addon file into iframe
3) Use CSS: you can apply a CSS file to a document via contentStyle and contentStyleFile in page-mods (see the page-mod sketch after these options). A stylesheet attached this way can't be inspected by the document itself, and you can use the content property to add your text to the page without actually adding DOM that can be inspected. So, your style for instance could be:
span#comment-12::after{
content: 'Hello World';
}
Where the DOM you add could be:
<div><span id='comment-12'></span></div>
If the page tries to inspect the content of the span, it will get an empty text node; and because the stylesheet added this way cannot be inspected from the page itself, the page cannot read the style rules to get the text.
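A minimal sketch of option 3 with the Add-on SDK's page-mod API (the include pattern and comment text are illustrative):

var pageMod = require("sdk/page-mod");

pageMod.PageMod({
  include: "*", // the pages you want to annotate
  // The page cannot inspect styles attached via contentStyle:
  contentStyle: "span#comment-12::after { content: 'Hello World'; }",
  // A content script injects the empty <span> anchor for the rule above.
  contentScript: 'var s = document.createElement("span");' +
                 's.id = "comment-12";' +
                 'document.body.appendChild(s);'
});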
Not sure if there are alternatives; those are the solutions that come to mind.
Add-ons that do similar things implement some combination of a whitelist / blacklist feature where the add-on user either specifies which sites they want the action to happen on, or a range of sites they don't want it to happen on. As an add-on author, you would create this and perhaps provide a sensible default configuration. Adblock Plus does something similar.
Create an iframe and bind all your events to the new DOM. By giving it a different domain to the website, you will prevent them from listening in to events and changes.
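A minimal sketch of the iframe approach (the overlay URL is illustrative and must live on a different origin than the host page):

var frame = document.createElement('iframe');
frame.src = 'https://comments.example.com/overlay.html';
frame.style.cssText = 'position: fixed; top: 0; right: 0; width: 300px; height: 400px; border: 0;';
document.documentElement.appendChild(frame);
// The same-origin policy stops the host page from reaching into the
// frame's DOM; talk to it from your add-on code (e.g. via postMessage).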
Add-ons can use the anonymous content API that the devtools use to create their node highlighter overlays.
The operations supported on anonymous content are fairly limited, though, so it may or may not be sufficient for your use-case.
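A rough sketch, with the caveat that insertAnonymousContent is a privileged (chrome-only) API, so this is only callable from add-on code, not from an ordinary page script; treat the details as an assumption to verify:

var holder = document.createElement('div');
holder.id = 'comment-12';
holder.textContent = 'Hello World';

// Returns an AnonymousContent handle; the inserted content never appears
// in the page's DOM tree, so page scripts cannot observe or read it.
var anonymous = document.insertAnonymousContent(holder);

// Later, remove it through the same handle.
document.removeAnonymousContent(anonymous);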