Should JS dynamically generate metadata/the whole page?

So I am going to have many pages that contain a bunch of text, which a shared JS and CSS file will turn into a fully styled webpage. Since the text is usually long and there will be many webpages, I want to keep file sizes down. And since I don't want to sacrifice quality, I have decided that my JS file will take the raw text and build the webpage out of it. Side note: what I'm making is tutorial pages, so I plan to use JS to generate everything that repeats on every tutorial page, like the lessons list, to lower file size.
I have noticed that metadata (<head> content) usually takes up some space that JS could generate instead, so I thought: why don't I just generate it with JS? But then the problem arose that some browsers might not parse it, or might be slow to parse it. So I am asking here on Stack Overflow:
Should JavaScript generate metadata (and maybe almost the whole page, like removing the <head> tag completely and generating it with JS)?

It depends on your desired result.
Google has improved its SEO mechanisms to render your page before indexing it; see here:
https://developers.google.com/search/docs/guides/javascript-seo-basics
However, other bots may not do the same, such as social media crawlers like Facebook's or Twitter's that read Open Graph meta tags, or other search engines like Baidu.
If a bot doesn't render your document, the JavaScript doesn't get executed and your meta tags simply aren't there.
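For illustration, here's a minimal sketch of what generating metadata at runtime looks like (the title text is just a placeholder). Any crawler that doesn't execute JavaScript will never see these tags:

document.title = 'Lesson 3: CSS Selectors';

var meta = document.createElement('meta');
meta.setAttribute('property', 'og:title');                // Open Graph tag for social crawlers
meta.setAttribute('content', 'Lesson 3: CSS Selectors');
document.head.appendChild(meta);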
Additionally, if your initial document does not reference its stylesheets and other external resources up front, things take a bit longer for the client. Imagine the process:
With <head>:
1. fetch document
2. fetch resources
3. render content

Without <head>:
1. fetch document
2. render content
3. fetch resources
4. re-render
That's way oversimplified, but it demonstrates my point.
Alternative:
If your content is that dynamic, you might consider Server-Side Rendering (SSR) or pre-rendering.
You would either build your pages programmatically ahead of time and store/serve them all, or build them on the server side as they are requested.
https://developers.google.com/web/updates/2019/02/rendering-on-the-web
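For a static tutorial site, pre-rendering can be as simple as a build script. Here's a minimal Node.js sketch; lessons.json, its fields, and the output paths are all hypothetical:

const fs = require('fs');

// One entry per tutorial: { slug: 'css-selectors', title: '...', body: '...' }
const lessons = JSON.parse(fs.readFileSync('lessons.json', 'utf8'));

for (const lesson of lessons) {
  const html = '<!DOCTYPE html><html><head><title>' + lesson.title +
    '</title></head><body>' + lesson.body + '</body></html>';
  fs.writeFileSync('dist/' + lesson.slug + '.html', html);
}

This way every served page already has its <head> filled in, and no crawler has to run JavaScript to see it.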

Related

Dynamic HTML content pages like Dropbox and Soundcloud

Check out the source code of Dropbox's main page or any Soundcloud page. You can see they've got a lot of scripts going on and very little plain HTML content (article, main, p, div). I've been searching, and it seems that this way of generating pages is called dynamic content/HTML (correct me if I'm wrong).
So, as far as I can tell, the point is that you can edit multiple separate external files in JavaScript (if that's the language they use, since they're scripts) so that the HTML documents they're linked to are generated dynamically.
Another possible use would be to have one external document, let's say a navigation bar, that you place in multiple pages; when you have to update it, you just edit that external document instead of each page (hooray!).
Questions:
Is it actually called dynamic content?
What languages does it require besides HTML, CSS, and JS? Like PHP or ASP (supposing any is necessary at all).
Does creating pages that way affect your website's positioning in Google negatively or positively? I'd think that when Googlebot reaches the page, all it sees is scripts.
There are two subtly different definitions of the word dynamic, which may be confusing your search for information about this. I'll answer your questions separately for each.
Dynamic as in "generated from content held in a database"
For example, on this page your reputation score was fetched from Stack Overflow's database and injected into the HTML.
Yes, this would be referred to as dynamic content. In contrast to static content, which would just be fixed files, dynamic content would be built up from its parts for each user who requests it.
Your second set of languages (PHP, etc.) is what reads from the database and spits out the corresponding HTML.
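The question mentions PHP, but the same idea looks like this in a minimal Node.js/Express sketch; the db module and its getUser helper are hypothetical stand-ins for your data layer:

const express = require('express');
const db = require('./db');   // hypothetical database helper
const app = express();

app.get('/user/:id', async (req, res) => {
  const user = await db.getUser(req.params.id);            // read from the database
  res.send('<h1>' + user.name + '</h1>' +
           '<p>Reputation: ' + user.reputation + '</p>');  // build the HTML per request
});

app.listen(3000);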
Google's bot is smart: it can render pages and will see similar content to what you get in a browser. So generating pages dynamically instead of statically won't count against the site for SEO; dynamically generating lots of pages that are very similar might count against it, though.
Dynamic as in "page content that updates without you having to refresh the whole page"
For example, as you wrote your question Stack Overflow tried to find similar questions and show them to you in case it had already been asked. JavaScript was sending a request to their server and updating part of the page in response.
This would also be referred to as dynamic content. The key difference is that it's JavaScript in the page that's making further calls to the server to fetch more content, which is what you're seeing on the minimalist sites you mention. This used to be called dynamic HTML (DHTML); more modern references are more likely to discuss it in terms of AJAX or "single page website".
Typically you'd have PHP or similar running on the web server, responding to the requests for content.
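As a rough sketch of the client side, this is the kind of call the page's JavaScript makes; the URL and the element IDs are hypothetical:

var title = document.getElementById('title').value;   // the question being typed

fetch('/similar-questions?q=' + encodeURIComponent(title))
  .then(function (response) { return response.json(); })
  .then(function (questions) {
    // update just one part of the page, no full refresh
    document.getElementById('similar').innerHTML = questions
      .map(function (q) { return '<li>' + q.title + '</li>'; })
      .join('');
  });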
Again, Google's bot is smart enough to cope with this. That won't necessarily be the case for all search engines though.

JavaScript treeview for large static website

I need suggestions for a "treeview" (navigation) JS widget for a site that is:
Really large (up to 100,000 pages)
Static - all pages are generated from an external source, and the widget is embedded in every page.
To clarify: there are no frames and no application server. All pages are generated and placed in a file system, and each page is loaded independently. That means the treeview navigation will be loaded every time as well, so it should either use multiple files and load parts of the tree on demand, or be super-efficient.
Commercial OK.
All serious JS tree widgets allow dynamic loading of children. The key issue here is that most of them will send the server a query like getChildren?parent=23674, and that won't do in your case.
Since the site is static, you need to generate files describing the branches of the tree in JSON format and request those from the server as the user expands nodes in the tree. You could also create files that contain the tree children as HTML, but you will be more flexible if you send data to the client and use JavaScript to convert it into HTML (plus you will save a lot of bandwidth).
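As a sketch of what that looks like, assuming one JSON file per branch generated at build time (the file layout and the renderChildren helper are hypothetical):

function loadChildren(nodeId, parentElement) {
  // e.g. /tree/23674.json, a static file generated alongside the pages
  fetch('/tree/' + nodeId + '.json')
    .then(function (response) { return response.json(); })
    .then(function (children) {
      renderChildren(parentElement, children);  // hypothetical: builds the <li> nodes
    });
}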
Try Yahoo's TreeView. There is an example of how to load data dynamically.
I noticed that none of the links are working anymore. However, there is one tree that was written for exactly this reason, namely efficiency with large amounts of data. You might want to check out PixoTree and see if it's the right tool for you.
PS: I know it's an old question, but this might help someone who stumbles upon it.

If grouping front-end code helps reduce requests, why aren't more websites written on one html document?

I guess what I'm asking is that if grouping JavaScript is considered good practice, why don't more websites place the JavaScript and CSS directly into one HTML document?
why don't more websites place the JavaScript and CSS directly into one HTML document
Individual file caching.
External files have the advantage of being cached. Since scripts and styles rarely change (they're static) and/or are shared between pages, it's better to separate them from the page, making the page itself lighter.
Instead of downloading 500 KB of page data with embedded JS and CSS, why not load the 5 KB page and pull the 495 KB worth of JS and CSS from the cache? That saves you 495 KB of bandwidth and avoids two additional HTTP requests.
And although you could embed the JS and CSS in the page, the page itself will most likely be dynamic, so a fresh copy would be downloaded every time, making each request very heavy.
Modular code
Imagine a WordPress site. They are built using a ton of widgets made by different developers around the world. Handling that much code stuffed into one page is possible, but unmanageable.
If some code short-circuits or just doesn't work on your site, it's easier to take it out by removing the link to the external file than to scour the page for the related code and possibly accidentally remove code from another widget.
Separation of concerns
It's also best practice to separate HTML from CSS and JS. That way, it's not spaghetti you are dealing with.
When you have a lot of code in a single document, it's harder to work with because you need more time to find the part you want to change.
That is why it's good practice to divide code into separate files, each solving its own particular task, and then include them where necessary.
However, you can write a script that joins your files, turning the development version (many files) into a release version (fewer files), but this brings two problems:
People are often too lazy to do the additional coding to create this script, and then to change it as the structure of the project becomes more complex.
If you find a bug or add a small feature, you will need to rebuild the project again, in both the development and release versions.
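That said, the join script itself can be tiny. A minimal Node.js sketch, with hypothetical file names:

const fs = require('fs');

// Development files, in dependency order
const files = ['src/nav.js', 'src/slider.js', 'src/forms.js'];

// Concatenate into one release file (minification would be a further step)
const bundle = files.map(f => fs.readFileSync(f, 'utf8')).join(';\n');
fs.writeFileSync('dist/site.js', bundle);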
They separated them so that multiple webpages can use the same file. When you change that single file, multiple pages are automatically updated as well. In addition, a big HTML file takes a long time to download.

newbie question about javascript embed code?

I am a javascript newbie. I am trying to write a requirements document, and need some help describing what I am looking for. We want our application to generate a javascript snippet like this:
<script src="http://www.jotform.com/jsform/10511502633"></script>
This will load a web form.
So my question is:
- How does a single script load an entire web form? Is this a JSON?
- What is this called? Is this a cross browser javascript?
- Can anyone point me in the direction of learning more about what this is?
Thank you for your help!
The JavaScript file is just hosted on an external site. It appears to be dynamically generated, so feel free to use some fancy words ;) But basically, you just include it there, as if it were on your own site.
You could say: "The application will generate the required script tags to include a dynamically generated JavaScript file from an external, third-party site".
Of course, you need to take special precautions for cases where the include won't work because the other site is not reachable (the site is down, DNS does not resolve, the file has moved to another web server, your application is on an intranet/behind a proxy/firewall...). Why can't you copy their file and mirror it locally? Or use a reliable Content Delivery Network, like Google or Amazon.
There are many names for this type of inclusion, the most common being widget.
What does it actually do:
take an id of some sort as parameter
use the id to fetch some specific data (most likely from a database)
generate some js and html based on the id/data
usually this involves iframes of some sort.
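As a rough sketch, the embedded script might look something like this; the form URL is hypothetical:

(function () {
  var script = document.currentScript;            // the <script> tag that loaded this file
  var formId = script.src.split('/').pop();       // e.g. "10511502633"

  var iframe = document.createElement('iframe');
  iframe.src = 'https://example.com/forms/' + formId;
  iframe.style.border = '0';
  iframe.width = '100%';

  script.parentNode.insertBefore(iframe, script); // inject the form where the include sits
})();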
Using a script rather than an HTML iframe has multiple advantages:
you can change what is actually delivered to the users browsers without changing the include
you can resize the iframe to fit certain predefined sizes
you can inject the necessary things into the page the widget is included in (of course, you need to make sure this is sanctioned)
We use this all the time, and we have never regretted it.
If you don't want to build the widget infrastructure yourself you can always use one of the widget providers like widgetbox:
http://www.widgetbox.com/widgets/make/
With those you are up and running in no time.
This is typically called a script include.
Google has lots of these types of items, and even they call them by many names: widgets, custom JavaScript, snippets, custom code, etc. It really depends on whom you are writing for... I would go with "cross-platform embeddable JavaScript code", meaning that it would need to load all its dependencies. Also specify which browsers need to be supported and what should happen if the user has JavaScript turned off.
EDIT :
Actually, since we are talking about unique IDs, you will probably need two parts: the user/site-unique "cross-platform embeddable JavaScript code" and whatever server-side code supports it. Basically this is an API that is accessed using your own JavaScript widget. Feel free to point to examples in your requirements document; programmers love examples.

Javascript and website loading time optimization

I know that best practice for including javascript is having all code in a separate .js file and allowing browsers to cache that file.
But when we begin to use many jQuery plugins, each with its own .js file, and our functions depend on them, wouldn't it be better to dynamically load only the functions and .js files required by the current page?
Wouldn't it be faster, on a page where I only need one function, to embed it in the HTML with a script tag instead of loading the whole JS file with all the plugins?
In other words, aren't there cases where there are better practices than keeping all of our JavaScript code in a single separate .js file?
It would seem at first glance that this would be a good idea, but in fact it would make matters worse. For example, if one page needs plugins 1, 2 and 3, then a file would be built server-side with those plugins in it. Now the browser goes to another page that needs plugins 2 and 4. This causes another file to be built; the new file is different from the first one, but it also contains the code for plugin 2, so the same code ends up getting downloaded twice, bypassing the version the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, then the chances are that they are being hosted with a CDN. If you link to the CDN-hosted plugins, then any visitors who are hitting your site for the first time and who have also happened to have hit another site that's also using the same plugins from the same CDN, the plugins will already be cached.
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as not to hold up page rendering. You should also look into lazy initialization: for anything that needs significant setup to work, attach a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
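A minimal sketch of that lazy-initialization pattern; initHeavyEditor is a hypothetical function with expensive setup:

var field = document.getElementById('editor');

field.addEventListener('focus', function bootstrap() {
  field.removeEventListener('focus', bootstrap); // the minimalist handler removes itself...
  initHeavyEditor(field);                        // ...and only now does the expensive setup
});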
One problem with having separate .js files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure Library has something for combining JavaScript files and their dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also, there is a tool called jingo http://code.google.com/p/jingo/ but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but during production I merge-and-minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and include a single querystring-timestamped script inclusion.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
  if (!$('body#contact').length) return;
  // Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
Network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
That means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can inline the images as data:image/png;base64 URIs.
When we release a new version of our web app, a shell script minifies and bundles everything into a single HTML page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE.
Ensure the page is cached and gzipped. There is probably a size limit to consider; we try to stay under 400 KB unzipped, and load secondary resources later when needed.
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend joining common bits of functionality into individual JavaScript module files and loading them only on the pages where they are used, via RequireJS/head.js or a similar dependency management tool.
For example, if you use lightbox popups, contact forms, tracking, and image sliders in different parts of the website, you could separate these into four modules and load each only where needed. That way you optimize caching and make sure your site carries no unnecessary flab.
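A minimal RequireJS sketch of that idea, with hypothetical module paths: the lightbox module is loaded only on pages that actually contain a lightbox:

if (document.querySelector('.lightbox')) {
  require(['modules/lightbox'], function (lightbox) {
    lightbox.init();   // hypothetical init function exported by the module
  });
}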
As a general rule, it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article 25 Techniques for Javascript Performance Optimization, including a section on managing JavaScript file dependencies.
Cheers, hope this is useful.
