Why use JavaScript routing?

There seem to be many libraries and packages (Crossroads.js, etc.) which support this javascript routing functionality, but I'm having trouble understanding a scenario where this is valuable.
Anyone care to go soup-to-nuts on situations where this is useful?
My background is with ASP.NET (web forms) programming and some amateur javascript/jquery.

It gives you the option to handle client behavior without reloading the whole page, as you would if routing were handled server-side.
It opens up possibilities for far more responsive and interactive designs: instead of reloading the whole page each time the route changes, you re-render only the portion of the site that changes for a given route. At the same time it reduces load on the server, because client-server communication shrinks to sending only the data required to display a page, with the client handling the rest (rendering views, etc.).
Thanks to Backbone.js or other MVC(-like) frameworks, you can reduce your server to a REST API for sending and receiving data, without it having to handle rendering; some, and at times most, of the logic moves to the client.
Most web apps nowadays take advantage of client-side routing: anything from Gmail to Twitter.

OK, I think I understand it better now. It's just a layer of abstraction between a function caller and callee. Instead of a hardcoded dependency between the caller and the callee, you introduce a routing system which connects the two based on some configuration and provides additional functions like validation, or binding multiple callees to a caller. You can then reference your actions with RESTful handles (e.g. "/getCoffee/decaf") which can also be constructed dynamically (since they're just strings).
I'm still pondering the relative benefits of a routing scheme vs creating a custom event.
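A minimal, framework-free sketch of that abstraction (all names here are illustrative, not from any of the libraries mentioned): routes are pattern strings mapped to handlers, and dispatch connects caller to callee at run time.

```javascript
// Routes decouple the caller from the callee: the caller only knows a
// path string, and the router resolves it to a handler plus parameters.
function createRouter() {
  const routes = [];
  return {
    add(pattern, handler) {
      // Convert "/getCoffee/:type" into a regex, collecting segment names
      const keys = [];
      const regex = new RegExp('^' + pattern.replace(/:([^/]+)/g, (_, k) => {
        keys.push(k);
        return '([^/]+)';
      }) + '$');
      routes.push({ regex, keys, handler });
    },
    dispatch(path) {
      for (const r of routes) {
        const m = r.regex.exec(path);
        if (m) {
          const params = {};
          r.keys.forEach((k, i) => { params[k] = m[i + 1]; });
          return r.handler(params);
        }
      }
      return null; // no route matched
    }
  };
}

// Usage: the "/getCoffee/decaf" example from above, with :type as a parameter
const router = createRouter();
router.add('/getCoffee/:type', params => `brewing ${params.type}`);
console.log(router.dispatch('/getCoffee/decaf')); // → "brewing decaf"
```

A custom event system answers "something happened, whoever cares may react"; a router like this answers "this string names an action, find its handler", which is why route strings compose so well with URLs.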


SAPUI5 / OpenUI5: More than one app in a portal

I have developed some SAPUI5 mobile apps and I'd like to merge them into a portal (with tiles) so I can switch between them as a "reputation".
Now I would like to know, what would be the "best" way to implement this case?
At the moment the apps have got a controller and views. My first idea was to build a "portal-app" which includes all the views of the other apps with an own controller but then I noticed that the performance has decreased (because all resources (OData-models etc.) load when starting the portal-app).
I also tried linking them (each with its own index.html), but that approach doesn't seem right either.
So is there a way to load the views, or even a whole app, dynamically, and how can I do that?
First of all, SAP's official solution for this problem is called the SAP Fiori Launchpad. However, it's much more complex to set up: you need an underlying application server which hosts SAP Fiori, and you have to handle user roles and assign applications to roles. Still, it's great for inspiration.
You can create a separate component which holds the references to other applications. Your applications can be referenced from Tiles.
I don't know the current implementation of your applications, but it's recommended to implement them as components (UI components if they have visual representation).
With components, you will be able to use Routing (navigating between views, or even components using hashes (urls)), which helps you to manage resources and services properly. With this you can prevent unwanted odata requests as well.
It can be a big step forward from a simple application architecture, but it's worth it.
Of course, you can implement one simple application without components; in that case you may run into the performance issues mentioned above. Consider moving data-intensive operations into event handlers and performing those tasks asynchronously.
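As a hedged sketch of the component-based setup described above: a UI component can declare its routes declaratively, so the router only instantiates the target view when its hash is hit. The component name, view names, and patterns below are placeholders, not from the question.

```javascript
// Component.js -- a UI5 component whose routing config maps hash patterns
// to views, so each sub-app's view (and its OData models, if created in
// that view's controller) is only loaded when the user navigates to it.
sap.ui.define(["sap/ui/core/UIComponent"], function (UIComponent) {
  "use strict";
  return UIComponent.extend("portal.Component", {
    metadata: {
      routing: {
        config: {
          routerClass: "sap.m.routing.Router",
          viewType: "XML",
          viewPath: "portal.view",
          controlId: "app",            // id of the App control in the root view
          controlAggregation: "pages"
        },
        routes: [
          { pattern: "",            name: "home", target: "home" },
          { pattern: "app1/{page}", name: "app1", target: "app1" }
        ],
        targets: {
          home: { viewName: "Home" },
          app1: { viewName: "App1" }
        }
      }
    },
    init: function () {
      UIComponent.prototype.init.apply(this, arguments);
      // Start listening to hash changes; only the matching view is loaded
      this.getRouter().initialize();
    }
  });
});
```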

Client-side or server-side framework?

My project would be a kind of Craigslist: a site where users can post announcements (everyday objects, cars, flats, etc.). So: authentication, a profile page, content creation, displaying the for-sale objects, and so on.
I have developed a very large part of the backend: I have a RESTful API in three-tier architecture developed in java. It makes the link with the db, to provide me with different urls and send me the relevant JSON.
URLs example:
http://api.mywebsite.fr/user?userid=1 sends me back:
{"user": {"username": "jdoe1234", "email", "jdoe1234#gmail.com"}}
I have URLs for all the actions performed on the entire site (announcement creation, latest data updates... everything), and I've carefully declared them as GET, POST, PUT, DELETE, etc. There is also OAuth to protect the API from queries the token does not allow.
That's all for the "server" aspect, I think that there is no problem with that.
But if all the actions are managed by the web service, I don't see what a big server-side framework like Symfony, CakePHP, or Zend would bring me beyond making HTTP requests to my various entry points, retrieving the JSON, and populating the HTML.
So I looked at client framework, like Angular, Ember and so on. At first, it seemed very suitable for my case: possibility of http requests, manage what to do in case of success or error, directly exploit the resulting JSON to populate the view etc.
I haven't even managed to choose between AngularJS and Ember, the two being very similar; but with the release of Angular v2, I fear for the maintainability of v1 (if I choose Angular, it will be v1, because the majority of tutorials and questions relate to Angular 1.x).
I don't know if I'm doing the right thing by choosing a client-side framework; I'm afraid it will constrain me. Plus, it's fully instantiated in the browser, so the user can change absolutely all the code and data I provide. That seems weird to me.
I want to be absolutely sure of the technology that I use in case I make this application available to the public for example. I want to do things properly, in order to avoid maintainability or security problems.
Summary: with the things I already have (web service / API), is it a good idea to use a client framework like Angular, or should I stay with a big server-side framework like Symfony or Zend? Bear in mind that I'm positioning myself for a context in which this platform would be massively used (traffic comparable to Craigslist).
I'd say it depends on whether you want to be more of a frontend or a backend guy in the future; if you want to be a full-stack developer, that doesn't apply.
In my opinion, neither Symfony/Zend nor the other big server-side frameworks are as exciting as dynamic frontend JavaScript frameworks like Ember/Angular/React.
Also, since you already have a RESTful API and OAuth authentication implemented on the backend, I'd go with Ember. Why? Ember Data is a great tool for talking to a backend API. It's mature, it lazily loads records when they're needed, and it's very customizable.
it's fully instantiated in the browser, so the user can change absolutely all code and data I provide...
Ember has built-in security features, like sanitizing data rendered in its templating language, HTMLBars. There is also support for CORS and the Content Security Policy (CSP) standard in Ember.
I want to be absolutely sure of the technology that I use in case I make this application available to the public, for example. I want to do things properly, in order to avoid maintainability or security problems.
In Ember you can create mature, secure, production-ready applications, but you need to be comfortable with your Ember skills to some degree to build such an ambitious web application; that's part of building any application, though.
With the things I already have (web service / API), is it a good idea to use a client framework like Angular?
Yes, it's a very popular solution to use the MEAN stack, or to go with Ember plus a RESTful API.
Why should I choose Ember instead of Angular (which has a larger community/tutorials/answered questions)?
Angular has the larger community and more tutorials and answered questions, but when I started a side project with Angular to learn its possible advantages over Ember, I was surprised by how little consensus there was in its community on how to do any one thing. So instead of a quick search for how to declare and use directives (I think that was the thing that confused me), I had to do extra research into which way is best. There are also lots of ways to set up a project (where to put custom directives and the various Angular objects), and you have to research which one to choose. I ended up using the healthy-gulp-angular repo as my template, but you can see it hasn't been updated for 8 months; Angular has changed a lot during those 8 months, and I'm not sure that repo is still the best choice.
In Ember you have the Ember CLI tool, built on the convention-over-configuration principle. You also have Ember Data, which follows the JSON API standard; if you don't have a JSON API-compliant server side right now, you can write a custom adapter to normalize server responses, or change how the backend replies. In Ember you don't have all that headache, with different "best" solutions for one basic thing depending on who you ask.
What does "single-page application" mean?
A single-page application is basically a page that doesn't have to reload all of its assets and HTML when you navigate. Its advantage over a classic server-rendered (e.g. PHP) site is that when the user moves to another location, only the new data for that route is downloaded.
Do those frameworks allow me to create real routes? (www.myapp/profile/userid etc.)
Yes, of course. You don't even need # in your URL. With a simple rewrite rule, a small amount of logic for the profile route, and the path profile/:userid, when a user opens the URL www.myapp/profile/userid he will automatically be taken to the profile route, and userid will be interpreted as a route parameter, so you can take that userid and find the user record in the store in the model hook.
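As a rough illustration of the route described above (a hedged sketch of Ember CLI-era code; the route name, model name, and file paths are assumptions for the example, not from the answer):

```javascript
// app/router.js -- declare a route whose path carries a dynamic segment;
// "profile" and ":userid" mirror the www.myapp/profile/userid example above
Router.map(function () {
  this.route('profile', { path: '/profile/:userid' });
});

// app/routes/profile.js -- the model hook receives the URL segment as a
// parameter and loads the matching record from the Ember Data store
export default Ember.Route.extend({
  model: function (params) {
    return this.store.findRecord('user', params.userid);
  }
});
```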
Client = speed, Server = stability
JS frameworks update once per week;
non-JS backends once per year.
Client-side code depends on behavior that varies by browser;
the backend depends only on the machine, not the environment.
I chose FE because I was tired of debugging by writing variable values to the database just to see what was going on in the controllers -_-

Using Push State in a Marionette Application for SEO

I did my homework and read through this mini series about pushstate:
http://lostechies.com/derickbailey/2011/09/26/seo-and-accessibility-with-html5-pushstate-part-2-progressive-enhancement-with-backbone-js/
From what I understand, the hard part of implementing pushState is making sure that my server side serves the actual pages for the corresponding URLs.
I feel like this is going to be a HUGE task, previously I was just sending a simple jade page as simple as:
body
  header
  section
  div#main
  footer.site-footer
    div.footer-icons.footer-element
    div.footer-element
      span.footer-link Contact Us
      span.footer-link Terms of Service
  script(src='/javascripts/lib/require.js', data-main='/javascripts/application.js')
and I was doing all the rendering with my Marionette Layouts and Composite Views, and to be honest it was a bit complicated.
So from what I understand I need to replicate all that complicated nesting/rendering using jade on the server side for pushState to work properly?
I used Underscore templates on the client side; what is an easy way to re-use them on the server side?
It depends on what you want to do...
To "just" use pushState, the only requirement is that your server returns a valid page for each URL that can be reached by your app. However, the content returned by the server does NOT have to match what will get rendered client side. In other words, you could use a "catch all" route on the server side that always returns the page you have above, and then let Backbone/Marionette trigger its route to handle the rendering and display.
That said, if you want to use pushState for SEO, you likely want to have the static HTML sent by the server on the first call, then have the Marionette app start to enhance the interactivity. In this case, it is much more complex and you might want to experiment with using options to trigger the proper behavior (e.g. using attachView when enhancing existing HTML, showing views normally after that initial case).
Push state can work properly WITHOUT your server actually serving your application in the way that is suggested.
Push state is merely an alternative to hashbang URLs, and it is supported in modern browsers. Check the History API docs; you will see there is no mention of having your site serve your application statically at your application's URLs (but bear in mind it is opt-in).
What the article you reference is saying, is that for good SEO, you should do this. That's because you cannot guarantee when a search engine crawls your site, that it will execute your javascript, and pick up your routes etc. So serving the site statically is simply to give the search engine a way to get your content without executing any javascript.
Like you say, by doing this you are essentially building two sites in parallel, and it does literally double the amount of work you need to do. This may be ok if you're building a relatively simple site filled with static content, but if you are creating a complicated application, then it is probably too much in most situations.
Although I would add that if you are building an application, then SEO doesn't really matter, so it's a moot point.

Using node.js to serve content from a Backbone.js app to search crawlers for SEO

Either my google-fu has failed me or there really aren't too many people doing this yet. As you know, Backbone.js has an Achilles heel: it cannot serve the HTML it renders to page crawlers such as Googlebot, because they do not run JavaScript (although given that it's Google, with their resources, the V8 engine, and the sobering fact that JavaScript applications are on the rise, I expect this to happen someday). I'm aware that Google has a hashbang workaround policy, but it's simply a bad idea. Plus, I'm using pushState. This is an extremely important issue for me, and I would expect it to be for others as well; SEO is something that cannot be ignored, and thus this architecture cannot be considered for many applications out there that require or depend on it.
Enter node.js. I'm only just starting to get into this craze but it seems possible to have the same Backbone.js app that exists on the client be on the server holding hands with node.js. node.js would then be able to serve html rendered from the Backbone.js app to page crawlers. It seems feasible but I'm looking for someone who is more experienced with node.js or even better, someone who has actually done this, to advise me on this.
What steps do I need to take to allow me to use node.js to serve my Backbone.js app to web crawlers? Also, my Backbone app consumes an API that is written in Rails which I think would make this less of a headache.
EDIT: I failed to mention that I already have a production app written in Backbone.js. I'm looking to apply this technique to that app.
First of all, let me add a disclaimer that I think this use of node.js is a bad idea. Second disclaimer: I've done similar hacks, but just for the purpose of automated testing, not crawlers.
With that out of the way, let's go. If you intend to run your client-side app on server, you'll need to recreate the browser environment on your server:
Most obviously, you're missing the DOM (Document Object Model) - basically the AST on top of your parsed HTML document. The node.js solution for this is jsdom.
That however will not suffice. Your browser also exposes BOM (Browser Object Model) - access to browser features like, for example, history.pushState. This is where it gets tricky. There are two options: you can try to bend phantomjs or casperjs to run your app and then scrape the HTML off it. It's fragile since you're running a huge full WebKit browser with the UI parts sawed off.
The other option is Zombie, which is a lightweight re-implementation of browser features in JavaScript. According to its page it supports pushState, but my experience is that the browser emulation is far from complete; still, give it a try and see how far you get.
I'm going to leave it to you to decide whether pushing your rendering engine to the server side is a sound decision.
Because Node.js is built on V8 (Chrome's engine), it will run JavaScript, including Backbone.js. Creating your models and so forth would be done in exactly the same way.
The Nodejs environment of course lacks a DOM. So this is the part you need to recreate. I believe the most popular module is:
https://github.com/tmpvar/jsdom
Once you have an accessible DOM API in Node.js, you simply build its nodes as you would for a typical browser client (maybe using jQuery) and respond to server requests with the rendered HTML (via $("myDOM").html() or similar).
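A minimal sketch of that flow, assuming the third-party jsdom package (the API shown is from modern jsdom versions, newer than what existed when this was written); `renderForCrawler`, the `#main` selector, and the item shape are illustrative:

```javascript
// Requires the jsdom package (npm install jsdom); not part of Node's stdlib.
const { JSDOM } = require('jsdom');

// Build the same nodes server-side that the client views would render,
// then serialize the whole document to HTML for the crawler's response.
function renderForCrawler(shellHtml, items) {
  const dom = new JSDOM(shellHtml);
  const doc = dom.window.document;
  const main = doc.querySelector('#main');
  for (const item of items) {
    const li = doc.createElement('li');
    li.textContent = item.title;
    main.appendChild(li);
  }
  return dom.serialize(); // full HTML string, no JavaScript required to view
}
```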
I believe you can take a fallback strategy type approach. Consider what would happen with javascript turned off and a link clicked vs js on. Anything you do on your page that can be crawled should have some reasonable fallback procedure when javascript is turned off. Your links should always have the link to the server as the href, and the default action happening should be prevented with javascript.
I wouldn't say this is backbone's responsibility necessarily. I mean the only thing backbone can help you with here is modifying your URL when the page changes and for your models/collections to be both client and server side. The views and routers I believe would be strictly client side.
What you can do, though, is make your jade pages and partials renderable from the client side or the server side, with or without content injected. That way the same page can be rendered either way: if you replace a big chunk of your page and change the URL, the HTML you are grabbing can come from the same template as if someone had gone to that page directly.
When your server receives a request, it should take the user directly to that page, rather than going through the main entry point, loading Backbone, and having it manipulate the page into the state the URL implies.
I think you should be able to achieve this just by rearranging things in your app a bit; no real rewriting, just a good amount of moving things around. You may need to write a controller that serves HTML files with or without content injected. This will give your Backbone app the HTML it needs to couple with the data from the models. As I said, those same templates can be used when you hit those links directly through the routes defined in Express/Node.js.
This is on my todo list of things to do with our app: have Node.js parse the Backbone routes (stored in memory when the app starts) and, at the very least, serve the main page's template as straight HTML; anything more would probably be too much overhead/processing for the backend when you consider thousands of users hitting your site.
I believe Backbone apps like Airbnb do it this way as well, but only for robots like the Google crawler. You also need this for things like Facebook Likes, since Facebook sends out a crawler to read your og: tags.
A working solution is to use Backbone everywhere: https://github.com/Morriz/backbone-everywhere. But it forces you to use Node as your backend.
Another alternative is to use the same templates on the server and the front end. The front end loads Mustache templates using the require.js text plugin, and the server renders the page using the same Mustache templates.
A further addition is to render bootstrapped module data as JSON in a script tag, to be used immediately by Backbone to populate models and collections.
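The bootstrapped-data idea can be sketched as a small helper (the `BOOTSTRAP_DATA` name and the function are illustrative, not from the answer). Escaping `<` matters: a stray `</script>` inside the JSON would otherwise terminate the script tag early.

```javascript
// Serialize initial model data for embedding in the server-rendered page,
// so Backbone can populate collections without a second round trip.
function bootstrapScript(data) {
  // Replace "<" with its \u escape so the JSON can never contain "</script>"
  const json = JSON.stringify(data).replace(/</g, '\\u003c');
  return '<script>window.BOOTSTRAP_DATA = ' + json + ';</script>';
}

console.log(bootstrapScript({ note: 'safe' }));
// Client side, at app start (sketch):
//   myCollection.reset(window.BOOTSTRAP_DATA.items);
```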
Basically you need to decide what it is that you're serving: is it a true app (i.e. something that could stand in as a replacement for a dedicated desktop application), or is it a presentation of content (i.e. classical "web page")? If you're concerned about SEO, it's likely that it's actually the latter ("content site") and in that case the "single-page app" model isn't appropriate; you really want the "progressively enhanced website" model instead (look up such phrases as "unobtrusive JavaScript", "progressive enhancement" and "adaptive Web design").
To amplify a little, "server sends only serialized data and client does all rendering" is only appropriate in the "true app" scenario. For the "content site" scenario, the appropriate model is "server does main rendering, client makes it look better and does some small-scale rendering to avoid disruptive page transitions when possible".
And, by the way, the objection that progressive enhancement means "making sure a sighted user doesn't get anything better than a blind user with text-to-speech" is an expression of political resentment, not reality. Progressively enhanced sites can be as fancy as you want them to be from the perspective of a user with a high-end rendering system.

Advantages / Disadvantages to websites generated with Javascript

Two good examples would be google and facebook.
I have lately pondered the motivation for this approach. My best guess would be it almost completely separates the logic between your back-end language and the markup. Building up an array to send over in JSON format seems like a tidy way to maintain code, but what other elements am I missing here?
What are the advantages / disadvantages to this approach, and why are such large scale companies doing it?
The main disadvantage is that you will have some pain getting the content of your site indexed.
For Google you can partly solve the problem by using its AJAX crawling scheme, which allows dynamically generated (no page reload) content of your page to be indexed.
To do this, your virtual links must be addressed like so: http://yoursite.com/#!/register/. Google then requests the corresponding URL with an _escaped_fragment_ query parameter to index the content of that address.
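For concreteness, the URL rewriting in that crawling scheme (note: Google has since deprecated it) can be sketched as a small helper; the function name is illustrative:

```javascript
// Map a "#!" (hashbang) URL to the _escaped_fragment_ URL that the
// crawler requests and that the server must answer with rendered HTML.
function escapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // not a crawlable AJAX URL; leave unchanged
  const fragment = url.slice(i + 2);
  return url.slice(0, i) + '?_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(escapedFragmentUrl('http://yoursite.com/#!/register/'));
// → http://yoursite.com/?_escaped_fragment_=%2Fregister%2F
```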
When a virtual link is clicked there is no page reload; you provide this with an onclick handler:
<a href='http://yoursite.com/#!/register/' onclick='showRegister()'>Register</a>
The advantage is that the content of the page changes without a page reload. In my own practice I do not use JavaScript generation for this, because I build my interface in fixed positions: when the page reloads, the user doesn't notice anything, because the interface elements appear in the expected places.
So my opinion is that dynamic page generation is a big pain. I think Google did it not to separate markup from backend (that's not a real problem; you can solve it with a layered backend/frontend structure) but to take advantage of a convenient, pleasant presentation for users.
Advantages
View state is kept on the client (removing load from the server)
Partial refreshes of pages
Server does not need to know about HTML which leads to a Service Oriented Architecture
Disadvantages
Bookmarking (state in the URL) is harder to implement
Making it searchable is still a work in progress
Need a separate scheme to support non-JS users
I don't 100% understand your question, but I'll try my best here...
Google and Facebook both extensively use JavaScript across all of their websites and products. Every major website on the web uses it.
JavaScript is the technology used to modify the behavior of websites.
HTML => defines structure and elements
CSS => styling the elements
Scripting languages => dynamically generating elements and filling them with data
JavaScript => modifies all of the above by interacting with the DOM, responding to events, and styling elements on the fly
This is the 'approach' as you call it to every website on the web today. There are no alternatives to JavaScript/HTML/CSS. You can change the database or scripting language used, but JavaScript/HTML/CSS is a constant.
Consider an example of a simple form validation ...
The client sends a request to the server; the server executes the server-side code containing the validation logic and, in its response, sends the result back to the client.
If the client has the capability to execute that validation itself (logic that can run client-side), it won't need to send a request to the server and wait for the server to respond.
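A minimal sketch of such client-side validation (the field names and rules are made up for the example): the check runs in the browser, so no request is sent for obviously bad input.

```javascript
// Validate form fields locally; an empty error list means the form may
// be submitted, otherwise the errors are shown without any server trip.
function validateForm(fields) {
  const errors = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email)) {
    errors.push('email is invalid');
  }
  if (!fields.name || fields.name.trim() === '') {
    errors.push('name is required');
  }
  return errors;
}

console.log(validateForm({ name: 'Jane', email: 'jane@example.com' })); // → []
console.log(validateForm({ name: '', email: 'nope' }));
```

The server must still re-validate on submission, since the client-side check can be bypassed; the client-side copy only saves round trips for the common case.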
I suggest you take a look at the Google Page Speed best practices (http://code.google.com/intl/it-IT/speed/page-speed/) to see what factors make a good page. Generating a page with JavaScript seems cool because of the separation of UI and logic, but it is totally inefficient in practice.
