Is it recommended to render your whole website with JavaScript? (Optimizing) [duplicate]

I read somewhere that the pros output only one line of HTML and one line of JavaScript per page, with the rest of the rendering done by the client. I found this very promising, so I thought I'd use the following structure to render pages:
<html>
<head>
{{Title}}
{{Meta tags}}
{{CSS via CDN}}
</head>
<body>
{{Dynamic JSON array containing the data of the current page}}
{{Javascript libraries via CDN}}
{{JS files that contain HTML templates via CDN}}
</body>
</html>
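For concreteness, a filled-in instance of that skeleton might look like the following (all URLs and names are made up for illustration):
<html>
<head>
<title>Example page</title>
<meta name="description" content="...">
<link rel="stylesheet" href="https://cdn.example.com/site.css">
</head>
<body>
<script>
// the dynamic JSON payload for the current page, printed by the server
var pageData = {"title": "Example page", "items": []};
</script>
<script src="https://cdn.example.com/libs/jquery.min.js"></script>
<script src="https://cdn.example.com/templates/pages.js"></script>
</body>
</html>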
So the questions are:
Is it really a good practice?
Is it worth it to load the HTML templates via CDN?
SEO is secondary, but of course I'd render some necessary meta tags.
Thanks for your answers!

Is it really a good practice?
That's rather subjective. It depends on how much you value reliability, performance and cost.
You can get a performance boost, but you either:
Have a very fragile system that will completely break if a JS file fails to load for any reason, trips over a browser bug, etc. or
Have to start by building all your logic server side and then duplicate all the work client side and use pushState and friends to have workable URIs.
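A minimal sketch of that second approach, where client-side routing keeps the URIs workable (renderPage is a placeholder for your own client-side renderer):
// intercept in-app link clicks, render client side, keep the URI in sync
document.addEventListener('click', function (e) {
  var link = e.target.closest('a');
  if (!link || link.origin !== location.origin) return; // let external links through
  e.preventDefault();
  renderPage(link.pathname);                // placeholder: your client-side renderer
  history.pushState({}, '', link.pathname); // the URI stays shareable and bookmarkable
});
// handle the back/forward buttons too
window.addEventListener('popstate', function () {
  renderPage(location.pathname);
});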
SEO is secondary, but of course I'd render some necessary meta tags.
Leaving aside questions of meta tags being necessary for SEO… rendering them with client side JavaScript is pointless. If the content is rendered only with client side JS then search engines won't see it at all.
Is it worth it to load the HTML templates via CDN?
Again, it depends. Using a CDN can get your static files delivered faster, but it is an added expense and requires a more complex build system for your site (since you have to deploy to multiple servers and make sure the published URIs match up).

Of course, it is good practice (if SEO really is of secondary importance) to:
Dynamically load a JSON array containing the data of the current page
Load JavaScript libraries via CDN
Load JS files that contain HTML templates via CDN
Besides, you can minify your JavaScript and gzip it.
As far as performance is concerned, client-side scripting is much faster than server-side scripting.

There are, of course, pros and cons to rendering a website in the client.
Pros:
You can reuse a given template. There is no need to ask the server to render a given UI element, so it's faster.
With tools like Meteor.js you can go even further: when re-rendering a template, only the parts that changed are replaced.
You can load a given module/subpage on demand, so you can still avoid loading all the data on the first load.
When the server doesn't render the website, it can handle more requests.
Websites are more dynamic. The user gets a feeling of immediacy when switching pages.
Cons:
It's not SEO-friendly, but there are easy-to-use tools that help deal with it (there is one for Meteor.js).
The calculation is easy :). Use dynamic JS website rendering :).
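As a tiny illustration of the template-reuse point above (the template function and data shape are made up):
// define the template once, reuse it for every post and every re-render
function renderPost(post) {
  return '<article><h2>' + post.title + '</h2><p>' + post.body + '</p></article>';
}

var posts = [{title: 'Hello', body: 'First post'}, {title: 'Again', body: 'Second post'}];
// no server round-trip is needed to render a new UI element
document.getElementById('posts').innerHTML = posts.map(renderPost).join('');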

It makes your initial render slower (browsers are extremely well optimized for rendering HTML), which can potentially affect your search rankings, and it is somewhat less amenable to caching. Twitter tried a JavaScript-and-JSON-only architecture and ended up going back to serving a prerendered page along with the JavaScript app because it gave better perceived response times. (Again, the actual response times aren't necessarily better, but the user sees the response sooner.)

Related

PHP template system vs javascript AJAX template

My PHP template looks like this:
$html = file_get_contents("/path/to/file.html");
$replace = array(
    "{title}" => "Title of my webpage",
    "{other}" => "Other information",
    ...
);
foreach ($replace as $search => $value) {
    $html = str_replace($search, $value, $html);
}
echo $html;
I am considering switching to a javascript/ajax template system. The AJAX will fetch the $replace array in JSON format and then I'll use javascript to replace the HTML.
The page would then be a plain .html file and a loading screen would be shown until the ajax was complete.
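The client-side counterpart of the PHP loop might look something like this (a sketch; the /replace.json endpoint is hypothetical and would return the $replace array as JSON):
// fetch the replacement pairs as JSON, then substitute in the browser
var xhr = new XMLHttpRequest();
xhr.open('GET', '/replace.json', true); // hypothetical endpoint: {"{title}": "Title of my webpage", ...}
xhr.onload = function () {
  var replace = JSON.parse(xhr.responseText);
  var html = document.body.innerHTML;
  for (var search in replace) {
    html = html.split(search).join(replace[search]); // str_replace equivalent
  }
  document.body.innerHTML = html; // hide the loading screen at this point
};
xhr.send();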
Are there any real advantages to this, or is the transition a waste of time?
A few of the reasons I think this will be beneficial:
The page will still load even if the MySQL or PHP services are down. If the AJAX fails, I can handle it with an error message.
Bot traffic (and anything else that doesn't run JS) will cause very little load on my server, since the AJAX will never be sent.
Please let me know what your thoughts are.
My 2 cents: it is better to do the logic on the template side (JavaScript). If you have a high-traffic site, you can offload some of the processing to each computer calling the site. Maybe fewer servers.
With JavaScript frameworks like AngularJS, the template side is pretty simple and efficient. And the framework will do caching for you.
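For instance, a minimal AngularJS take on the template above might look like this (the endpoint and names are illustrative):
<div ng-app="app" ng-controller="PageCtrl">
<h1>{{page.title}}</h1>
<p>{{page.other}}</p>
</div>
<script>
angular.module('app', []).controller('PageCtrl', function ($scope, $http) {
  // fetch the same values the PHP version substituted on the server
  $http.get('/page.json').then(function (response) {
    $scope.page = response.data;
  });
});
</script>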
Yes, SEO can be an issue with certain sites. There are proxy tools you can put in place that will render the site and return the static HTML to the bot. Plus, I think some bots render JavaScript these days.
Lastly, I like to template on the front-end because I like the backend to be a generic data provider (RESTful API). This way I can build a generic backend that drives web / mobile and other platforms in a generic way. The UI logic can be its separate thing in javascript.
But it comes down to the design needs of your application. I build lots of Software as a service applications so a single page application works well for me.
I've worked with a similar design pattern in other projects. There are several ways to do this, and the task involves managing multiple project or application modules. I assume you are working with a team of developers and not using either a PHP or a JavaScript MVC framework.
PHP Template
For many reasons, I'm against using the “search and replace” method, especially using a server-side scripting language to parse HTML documents as a templating kit.
Why?
As you maintain business rules and the project becomes larger, you will find yourself reading through a long list of regular expressions, parsing HTML into a DOM, and/or maintaining complicated algorithms for finding the nodes to replace with the correct text.
A placeholder such as {title} helps the script get by with fewer search-and-replace expressions, but the design pattern can get messy when shared among multiple developers.
It is fine to parse one or two HTML files to manage the output, but not the entire template. The network response can become slower with multiple, repetitive trips to the server, and that's just for the template; there may be other scripts making trips to the server for reasons unrelated to templating.
AJAX/JavaScript
Initially, AJAX with JavaScript might sound like a neat idea but I'm still not convinced.
Why?
You can't assume the web browser is JavaScript-enabled on every mobile or desktop device. You might need to structure the HTML template in a few different ways to manage the output for non-JavaScript browsers, perhaps including <noscript> and/or <iframe> tags on every page. And managing an alternative template for non-JavaScript browsers can be tedious.
Every web browser interprets JavaScript differently. Most developers know a few of the differences between IE, Firefox, Chrome, and Safari, to name a few. You might need to create multiple JavaScript files to detect and then load the JavaScript for each specific web browser; update one feature and you have to update the script for all of them.
JavaScript is visible in the page source. I wouldn't want to expose confidential JavaScript functions that might include credentials, leak sensitive data about web services, and/or contain SQL queries. The idea is to secure your page as much as possible.
I'm not saying both are impossible. You could still do either PHP or JavaScript for templating.
However, my “ideal” web structure would consist of a reliable MVC framework like Zend, Spring, or Magnolia. Such frameworks include many useful features such as web services, data mapping, and templating kits. Granted, the configuration requirements make it difficult for beginners to integrate an MVC framework into a project, but in the end you can delegate tasks in configuration, MVC concepts, custom SQL queries, and test cases to developers. That's my two cents.
I think the most important aspects you forgot are:
SEO: What about search engine bots? They won't be able to index your content if it is set by JavaScript only.
Execution and Network Latency: When your service is working, the browser waits until the page is loaded (let's say 800 ms) before making the extra AJAX calls to get your values. That might add an extra 500 ms (depending on network speed and geographic location...). Had you sent all the generated data from your server, it would have cost only ~1 ms more to prepare the complete response. Instead, the user spends a lot of time waiting on a blank page.
Caching: You could cache the generated pages in your web app; that way your load will be minimized as well. And if you still want to deliver content while your backend services (MySQL/PHP...) are down, you could even use Apache or Nginx caching.
But I guess it really depends on what you want to do.
For fast and simple pages, which seems to be your case, stick with backend enhancements.
For a dynamic/interactive app which can afford loading times, and doesn't care about SEO, you can delegate most things to the front-end. But then use an advanced framework like Angular, to handle templating, caching, etc...

SEO for html single-page site via quasi-html content

Suppose I have a JavaScript-heavy single-page web application. My JavaScript renders the DOM directly from the model / data source (JSON).
I came up with an approach to generate simple HTML from the data source (on the backend). This HTML is required only for search engines to index. After the page is loaded, JavaScript replaces this quasi-HTML with the proper UI. The quasi-HTML can be removed from the layout with display:none to avoid a performance penalty in the browser.
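Concretely, the served page might look something like this (a sketch; renderApp stands in for your own client-side rendering code):
<!-- server-rendered quasi-HTML: for crawlers only, hidden from users -->
<div id="seo-content" style="display:none">
<h1>Page title</h1>
<p>Plain-text version of the page content...</p>
</div>
<div id="app"></div>
<script>
// the client-side code builds the real UI, then drops the quasi-HTML
renderApp(document.getElementById('app')); // placeholder for your renderer
var seo = document.getElementById('seo-content');
seo.parentNode.removeChild(seo);
</script>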
Will it work?
Also I am concerned about legitimacy of the approach.
Thoughts?
It should work, giving the search engines content to crawl even if they don't read JavaScript. That said, bots evolve, and they read quite a bit of JavaScript nowadays: I've created a page that has only 2 sentences onBeforeLoad and uses AJAX to get the rest of the content, and I see Google indexing a lot of the keywords delivered by AJAX. A problem would be misleading the search bot, like putting in content irrelevant to your other page content; the bot might pick that up at some point and penalize you for it. As for "I am concerned about legitimacy of the approach": I wouldn't be. Keep the code valid and ride on.

Are javascript templates only useful for inserting small items into the DOM?

I recently came across JavaScript templates and have become quite intrigued.
I am building a large PHP application using the MVC pattern. Templating is handled by the rather awesome Twig.
I recently came across a javascript implementation of twig.
I have also read quite a bit about using JavaScript template engines.
Now, my application generates the full page for standard requests, as a fallback for users without JavaScript. For AJAX requests, it can generate just the content part of the page (no <head>, <body>, etc.).
The AJAX response object is currently just the rendered HTML content, which is then inserted into the DOM.
Should I instead return a response object containing a compiled javascript template and the objects to be inserted into the template? What are the benefits of doing that?
From the posts I have read, the JavaScript templates were only small snippets representing a small part of a page, for example displaying a comment on a blog post the instant the user submits it.
Are javascript templates only useful for inserting these sort of small "pieces" in a page?
Yes
A recent project I was on got "client-side template fever" and we used the dang things for every single template.
With every template library I've used (which is two or three), the error messages you get are not very good. If you have a huge template that operates on a fair amount of data, you'll quickly find the u.foo is null or not an object error message increasingly frustrating.
The best practices I've settled on are:
Return a full HTML snippet (from the server) if it's a template that is seldom loaded. If you are only loading that HTML once on a page, you might as well send it down populated, right? This also encourages you to keep your logic up on the server, where it probably belongs.
Use client-side templates for small, repeated templates. Your blog-comment example is probably a good one. I've found the most success when my client-side templates are pretty small (< 10 lines).
Use a logic-less template engine. The more logic allowable in a template, the harder they are to read/maintain. Plus, some of that logic should probably be in your business layer, not down in some JavaScript template. In other words, they force you to separate your presentation from your logic (which is good).
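For example, with a logic-less engine such as Mustache (a sketch; the template and data are made up):
// the template allows nothing beyond iteration and variable substitution
var template = '{{#comments}}<li>{{author}}: {{text}}</li>{{/comments}}';
var html = Mustache.render(template, {
  comments: [
    {author: 'Ann', text: 'Nice post!'},
    {author: 'Bob', text: 'Thanks for sharing.'}
  ]
});
document.getElementById('comment-list').innerHTML = html;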
PS: It is cool that both your client and server-side templates could use the same templating engine. This will make developers on your project much more productive.
Depending on the scale and requirements of your application, you should take the following into consideration:
don't go rampant on AJAX; AJAX is not WebSockets, so use it sparingly. Plus, client-side execution speed is always key. AJAX is slow compared to dumping as many resources as possible up front and using them when you need them; for example, you can send the client userdata = {name:'xxxx', address:'yyyy', ...} and use that, instead of requesting the name and address via AJAX only when you need them.
it is recommended to use a global PHP var $sendData (or something like that), and right after printing the HTML, send the $sendData with a simple <script>data = <?php echo json_encode($sendData); ?></script>
JavaScript templates add to execution time, which makes the alternative reasonable: separate everything that is dynamic and cache static resources like JavaScript functions.
you can't, and I quote, return a response object containing a compiled javascript template, not without resorting to some server-side JavaScript engine that does the compile job prior to returning it.
for your personal welfare, it will always boil down to how fast, easily, and painlessly you can maintain the application; there's no point in using super-beta experimental frameworks, ports, and kits over which you have limited control.
Work smart, not hard.
Good luck mate.
You could also look into the Distal http://code.google.com/p/distal templating engine.

Why do web applications send HTML over the wire?

This question pertains to web applications. I have very little web app development experience, so might be missing some very obvious points/issues. Please point them out.
As I understand it, in most web applications a web server sends HTML over the wire to a client (browser). This happens every time an HTTP request is made. I feel this is very wasteful of bandwidth.
1) Since browsers can run JavaScript, why don't we just send a JavaScript program which can generate the webpage's HTML content (which the browser then renders)?
2) Further, a browser might cache the JavaScript program, so that next time the server need only send the data. The protocol might involve the browser sending the "program version" it has.
Consider an example of a relatively simple website Hacker News [http://news.ycombinator.com]. Let us separate the data (30 posts + their metadata) from its presentation. Assuming 1) above, the server can just send the data (say in JSON) + a JavaScript program to generate HTML. This gist shows the idea. The data for the 30 posts is in JSON [http://www.json.org/js.html] format. For this particular example the data transferred is cut in 1/2 (size of data+JavaScript / size of HTML). Further if browsers can do 2) above, it reduces the data transferred on each visit to 1/4 (size of data / size of HTML). [Note: this analysis is without considering compression; gzip,deflate is very successful in reducing the size of HTML. But isn't prevention better than cure?]
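A sketch of the idea (the endpoint and field names are made up):
// the server sends only the data, e.g. GET /posts.json returns
// [{"title": "...", "url": "...", "points": 128, "comments": 42}, ...]
var xhr = new XMLHttpRequest();
xhr.open('GET', '/posts.json', true); // hypothetical endpoint
xhr.onload = function () {
  var posts = JSON.parse(xhr.responseText);
  // the (cacheable) JavaScript "program" turns data into HTML on the client
  document.getElementById('posts').innerHTML = posts.map(function (p) {
    return '<li><a href="' + p.url + '">' + p.title + '</a> (' +
           p.points + ' points, ' + p.comments + ' comments)</li>';
  }).join('');
};
xhr.send();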
I see at least the following advantages of this:
* For most web pages, it will reduce the size of data transferred over the wire.
* Forces web apps to separate data from its presentation.
Disadvantages might include more complex browsers and the time taken to run the JavaScript program to generate the HTML (though this might be offset by the reduction in data size).
Now my question is - why are web applications not developed this way, or, why do web applications send HTML over the wire? Surely the web server (sending out HTML) doesn't care about HTML at all, so why should it, first, generate it, and then send it over the wire?
There are a few reasons, some of them historical. This is by no means a complete list, just some of my experiences:
HTML predates JS, and a lot of scripts and libraries predate JS
Older browsers (think IE<=6) had rubbish, inconsistent JS engines; their rendering engines were much more consistent in how they treated HTML. So many more libraries and scripts predate consistent JS.
It is a nightmare to debug applications written as you suggest if they are not constructed right (we have one at my work, it takes 30 minutes to find where a piece of html is actually generated)
It is a lot more work to do it right - why not use templates or static docs or something much simpler
It's not really a problem: HTML compresses really well.
What you suggest is done; it's called AJAX (OK, AJAX is more general than this, but you know what I mean).
It simply doesn't work for most plain-text user agents, including those used by most search engines. If this page is serving most of your content, it's generally a good idea to make it easy for Google to parse.
Well, the obvious reason why this is the case is that JavaScript wasn't around when we started sending HTML around, and HTML was an improvement over sending around plain-text documents.
The reason we don't do this now: we eschew complex solutions to problems that aren't really problems.
Average internet connections download nearly 1M bytes per second, and web browsers are quite adept at parsing and starting to render HTML before it has even fully arrived. They're also great at parallelizing the downloading of resources on the page. If we want to save a few bytes at the cost of some compute cycles, we gzip content before sending it. Problem solved.
And for the record, we do this with AJAX in complex webpages (check out GitHub's source browsing for a great example of how awesome this can be).
What you suggest can, and is, done. Remember, web pages used to be static documents. Full blown web-based applications are a relatively recent idea.
I might also suggest that it isn't necessarily more efficient, especially when your pages are sent gzipped.
What you suggest is basically what a JavaScript full stack framework like ExtJS does. You can create rich, data intensive applications without writing any HTML -- well, only enough to reference the necessary .js libraries. The complex DOM needed for layouts, grids, forms etc is all created by the framework.
The simple answer is that HTML is older. Why is C99 not fully implemented with a lot of compilers? They figure 1989 is new enough for them. Also, JavaScript exercises a lot more control over people's browsers than they seem to want. Conditional statements and encoded data pose a security concern, and some people want to keep that can of worms closed to begin with. True, HTML is a very inefficient markup, but the size is insignificant compared to the images you download from the internet. That favicon takes up as much data as the page itself, and it's only 16 pixels across.
A good reason that the server-side code of a web application might do lots of HTML template work on the server side is that in many server environments it's not made easy to bundle up server-side data structures (object graphs) for easy delivery to the client. There may be information kept in server-side data structures that really shouldn't be delivered out to the client. Thus in order to send out a "pure" data-only response, the server would have to trim off sensitive data before delivering out the JSON. That's not an unsolvable problem, but I don't know of many server frameworks that facilitate a solution.
The server has direct, unfettered access to the database and to everything else that makes an application work: user preferences, history, account details, system settings, etc. To build an application that's client-centric for rendering purposes would mean concocting ways of keeping all that information intact and up-to-date on the client. For a lot of applications, that might not be terribly easy.
Finally, it's only relatively recently that it would make sense to trust a browser to provide a stable enough platform for building a long-lived "application environment" as a continually-updating web page. By building a web app such that pages are sometimes completely reloaded, there are lots of little "reboots". That's a cheap and dumb way of keeping a lid on at least some kinds of memory leaks.
Most implementations of sites with heavy JavaScript use won't start executing until the DOM has fully loaded; then you get a 'loading screen' on every page, where the page wrapper has downloaded but none of the content has.
Also, do remember that not all users have Javascript enabled, and not all browsers support high-level Javascript (think mobiles).
I would send HTML in a response if I wanted my application to work without Javascript. I would write HTML rendering code in my server-side language (most of the time not Javascript), which could then be used for two purposes: serving whole HTML pages, and serving bits of HTML in response to XHRs.
If the Javascript code is restricted to things like reporting UI events and replacing innerHTML with server-generated code, I don't have to duplicate any of my application logic across languages/frameworks. This duplication problem is one of the reasons why server-side Javascript is getting people excited.
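A sketch of that two-purpose rendering in server-side JavaScript (Express-style; loadComments is a placeholder for real data access):
var express = require('express');
var app = express();

// one rendering function serves both whole pages and XHR fragments
function renderComments(comments) {
  return '<ul>' + comments.map(function (c) {
    return '<li>' + c.text + '</li>';
  }).join('') + '</ul>';
}

app.get('/post/:id', function (req, res) {
  var fragment = renderComments(loadComments(req.params.id)); // placeholder data access
  if (req.xhr) {
    res.send(fragment); // XHR: just the bit of HTML to drop into the DOM
  } else {
    res.send('<!DOCTYPE html><html><body>' + fragment + '</body></html>');
  }
});

app.listen(3000);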

XML, XSLT and JavaScript

I'm having some trouble figuring out how to make the "page load" architecture of a website.
The basic idea is that I would use XSLT to present it, but instead of doing it the classic way with the XSL tags, I would do it with JavaScript. Each link would therefore refer to a JavaScript function that changes the content and menus of the page.
The reason I want to do it this way is to have the option of letting JavaScript dynamically show each page using the data provided in the first, initial XML file, instead of making a "complete" server request for the specific page, which simply has too many downsides.
The basic problem is that, after searching the web for a way to access the "underlying" XML of the document with JavaScript, I have only found solutions for accessing external XML files.
I could of course just "print" all the XML data into a JavaScript array fully declared in the document header, but I believe this would be a very, very nasty solution. And ugly, for that matter.
My questions therefore are:
Is it even possible to do what I'm thinking of?
Would it be SEO-friendly to have all the website pages' content loaded initially in the XML file?
My alternative would be to dynamically load the specific page's content using AJAX on demand. However, I find it difficult to see how that could be SEO-friendly at all: I can't imagine that a search engine would execute any JavaScript.
I'm very sorry if this is unclear, but it's really freaking me out.
Thanks in advance.
Is it even possible to do what I'm thinking of?
Sure.
Would it be SEO-friendly to have all the website pages' content loaded initially in the XML file?
No, it would be total insanity.
I can't imagine that a search engine would execute any JavaScript.
Well, quite. It's also pretty bad for accessibility: non-JS browsers, or browsers with a slight difference in JS implementation (e.g. new reserved words) that causes your script to have an error and boom! no page. And unless you provide proper navigation through hash links, usability will be terrible too.
All-JavaScript in-page content creation can be useful for raw web applications (infamously, GMail), but for a content-driven site it would be largely disastrous. You'd essentially have to build up the same pages from the client side for JS browsers and the server side for all other agents, at which point you've lost the advantage of doing it all on the client.
Probably better to do it like SO: primarily HTML-based, but with client-side progressive enhancement for useful tasks like checking the server for updates and printing the “this question has new answers” notice.
Maybe the following scenario works for you:
1. The browser requests your XML file.
2. Once loaded, the XSLT associated with the XML file is executed. Result: your initial HTML is output, together with a script tag.
3. In the JavaScript, an AJAX call to the current location is made to get the "underlying" XML DOM. From then on, your JavaScript manages all the XML processing.
4. Make sure that in step 3 the XML is not loaded from the server again but is taken from the browser cache.
That's it.
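Step 3 might look like this (a sketch; it assumes the XML document is served with an XML content type and cache headers so that step 4 holds):
// re-request the current URL; with proper cache headers the browser
// serves it from cache instead of hitting the server again (step 4)
var xhr = new XMLHttpRequest();
xhr.open('GET', window.location.href, true);
xhr.onload = function () {
  var xmlDoc = xhr.responseXML; // the "underlying" XML DOM
  // from here on, your JavaScript manages all the XML processing
};
xhr.send();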
