API, back-end and front-end as three separate components - javascript

I tried to find something on the internet but could not find anything similar. So I'm asking it here:
SITUATION: I have a big API which does some heavy calculations and has a lot of functionality. There are some clients already using this API who have implemented it in their software. Now I want to write a front-end for that API so that some users can manage their workflow more easily.
CONSIDERED SOLUTION: I am considering making a separate back-end application which would consume the API and serve the front-end (see the picture attached). The back-end would do authorization / caching / data-adapting operations.
QUESTION: I have never come across an app design with these three layers: API-BE-FE. Is it worth making things this way? Are there any significant drawbacks? Is it safe to put OAuth authorization on the back-end side rather than in the API itself? What are your thoughts about it?

I agree with your design. You have a specific API which is meant to serve specific endpoints. This way you are separating your concerns, as you can add to your BE things that aren't related to the API itself, but are related to the FE.
Also, many APIs use credentials and keys, so you can implement similar functionality.

Your considered solution for the architecture looks good.
The biggest advantage of putting a back-end between the front-end and the API is that it provides good separation of concerns. In my experience, front-end engineers tend to ask API engineers for new endpoints every time they need one. That looks like ordinary cooperation, but it sometimes goes too far: this kind of conversation can result in the API growing endpoints it never should have had. I am not sure what the architecture policy of the API team in your company is, but simply letting the API grow bigger for the sake of the front-end is not good. The more functionality the API takes on now, the worse it will get over time.
In your plan, the back-end accesses the API on behalf of the front-end. This is similar to the BFF (Back-end For Front-end) architecture described by Sam Newman (http://samnewman.io/patterns/architectural/bff/). With this concept, you implement the back-end as a kind of gateway which handles front-end-specific requests to the API. The back-end can even buffer the impact on the API of changes in the front-end if needed. Everything stays well separated.
In BFF, the back-end does not necessarily play the role of providing application-related functionality such as authorization, caching, and data-adapting operations, but this is up to you. You can implement new APIs to handle those responsibilities and have the back-end be just a gateway which ties them together. It would also work to put those things into the back-end itself, as long as it does not grow too fat.
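To make the idea concrete, here is a minimal sketch of a back-end acting as such a gateway, handling authorization, caching and data adaptation in front of the API. It assumes Node.js with Express and node-fetch; the upstream API location, the endpoint names, the auth check and the naive in-memory cache are all illustrative assumptions, not a definitive implementation.

```javascript
// Minimal BFF-style gateway sketch (Express + node-fetch assumed).
const express = require('express');
const fetch = require('node-fetch');

const app = express();
const API_BASE = 'https://api.internal.example.com'; // hypothetical upstream API
const cache = new Map(); // naive in-memory cache, for illustration only

// The back-end owns user-facing authorization; the API only sees the BFF's key.
function requireUser(req, res, next) {
  if (!req.headers.authorization) return res.sendStatus(401);
  next(); // real OAuth token validation would go here
}

// One front-end-specific endpoint that caches and adapts the API response.
app.get('/bff/dashboard/:userId', requireUser, async (req, res) => {
  const key = req.params.userId;
  if (cache.has(key)) return res.json(cache.get(key));

  const upstream = await fetch(`${API_BASE}/calculations?user=${key}`, {
    headers: { 'X-Api-Key': process.env.API_KEY },
  });
  const raw = await upstream.json();

  const adapted = { userId: key, results: raw.items }; // data-adapting step
  cache.set(key, adapted);
  res.json(adapted);
});

app.listen(4000);
```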
Drawback?
The possible drawback, I suppose, is the maintainability of scaling. This depends entirely on the infrastructure team or members you work with, but in production the API and the back-end will run on different servers or stacks, so you may need to keep their scaling consistent when your application receives a large amount of traffic. On the other hand, this independence can also be an advantage when monitoring hardware resources. You will need to find a sweet spot.

Related

How to consume a HATEOAS REST API in Angular?

I'm working on an Angular 4 front-end for an API built by another team. The API follows HATEOAS and provides me with hypermedia links with every single response.
I know the shape of the API and I figure I can just hard-code the URLs into Angular services with minimal fuss. However, a colleague (who is a backend developer) is trying to convince me that I should take full advantage of the hypermedia, because it will mean less coupling between the frontend and backend (and less potential for breakage if the API changes).
However, I'm stumped on how I'd even go about implementing a simple HATEOAS pattern using Angular's built-in Http service. How would I store/share the hypermedia/URL information in a way that doesn't couple all the services together and make them hard to test? There seem to be no examples out there.
Would trying to create a HATEOAS-friendly HTTP client even be a good idea, or is it likely not worth the trouble?
Your colleague is right, you should use the meta information that the back-end provides. In this way you are not putting responsibility on the client that doesn't belong there. Why should the client know from where to fetch the entities? Storing the entities (in fact the data in general) is the responsibility of the back-end. The back-end owns the data, it decides where to put it, how to access it, when to change the location or the persistence type, anything related to storing the data.
How would I store/share the hypermedia/URL information in a way that doesn't couple all the services together and make them hard to test?
Why do you think using HATEOAS makes testing harder? It does not; in fact, not using it makes testing harder, because the URLs are static, which makes the back-end hard to stub.
You can extract the information from the back-end response and store it as meta-information in the Angular model, on a _meta key or something like that.
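As a rough illustration of that idea in plain Javascript (not Angular's Http service itself; the _links and _meta key names assume a HAL-like payload):

```javascript
// Keep the hypermedia from each response on a _meta key of the model.
function toModel(response) {
  const { _links, ...data } = response;
  return { ...data, _meta: { links: _links || {} } };
}

// Follow a named link from a model instead of hard-coding the URL.
async function follow(model, rel) {
  const link = model._meta.links[rel];
  if (!link) throw new Error(`No link for rel "${rel}"`);
  const res = await fetch(link.href);
  return toModel(await res.json());
}

// Usage: the entry-point URL is the only one the client knows.
// const root = toModel(await (await fetch('/api')).json());
// const user = await follow(root, 'currentUser');
```

In a test, you only have to stub the entry point and hand back whatever links you want the service under test to follow.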

Client-side or server-side framework?

My project would be a kind of Craigslist: a site where users can post announcements (everyday-life objects, cars, flats, etc.). So: authentication, profile page, content creation, displaying the for-sale objects, etc.
I have developed a very large part of the backend: a RESTful API in a three-tier architecture, developed in Java. It talks to the DB and provides me with different URLs that send me the relevant JSON.
URLs example:
http://api.mywebsite.fr/user?userid=1 sends me back:
{"user": {"username": "jdoe1234", "email": "jdoe1234@gmail.com"}}
I have URLs for all actions performed on the entire site (announcement creation, latest data updates... everything), and I've carefully declared them as POST, GET, PUT, DELETE, etc. There is also OAuth to protect the API from requests that are not allowed for the token.
That's all for the "server" aspect, I think that there is no problem with that.
But if all the actions are managed by the web service, I do not see what a big server-side framework like Symfony, CakePHP, Zend, etc. would bring me, just to make HTTP requests to my different entry points, retrieve JSON and populate the HTML.
So I looked at client-side frameworks like Angular, Ember and so on. At first glance, they seemed very suitable for my case: making HTTP requests, managing what to do on success or error, directly using the resulting JSON to populate the view, etc.
I haven't even managed to choose between AngularJS and Ember, the two being very similar, but with the release of Angular v2 I worry about the maintainability of v1 (if I choose Angular, it will be v1, because the majority of tutorials and questions relate to Angular 1.x).
I don't know if I'm doing the right thing by choosing a client-side framework; I am afraid it would constrain me ('brider' in French). Plus, it's fully instantiated in the browser, so the user can change absolutely all the code and data I provide. That seems weird to me.
I want to be absolutely sure of the technology I use in case I make this application available to the public, for example. I want to do things properly in order to avoid maintainability or security problems.
Summary: with the things I already have (web service / API), is it a good idea to use a client-side framework like Angular, or should I stay with a big server-side framework like Symfony/Zend, etc.? Bear in mind that I am assuming this platform would be massively used (Craigslist-comparable traffic).
I'd say it depends on whether you want to be more of a frontend or a backend developer in the future. If you want to be a full-stack developer, then that consideration doesn't apply.
In my opinion, Symfony, Zend and other big server-side frameworks aren't as exciting as dynamic frontend JavaScript frameworks like Ember/Angular/React.
Also, since you already have a RESTful API and OAuth authentication implemented on the backend, I'd go with Ember. Why? Ember Data is a great tool for talking to a backend API. It's mature, it lazily loads records when they're needed, and it's very customizable.
it's fully instantiated in the browser, so the user can change absolutely all code and data I provide...
Ember has built-in security, such as sanitizing data rendered in its templating language, HTMLBars. There is also support for CORS and the Content Security Policy (CSP) standard in Ember.
I want to be absolutely sure of the technology that I use in case I make this application available to the public for example. I want to do things properly, in order to avoid maintainability or security problems.
In Ember you can create mature, secure, production-ready applications, but you need to be comfortable with your Ember skills to some degree to build such an ambitious web application; that, however, is part of building any application.
With the things that I already have (web service / API), is it a good idea to use a client framework like Angular?
Yes, it's a very popular solution to use the MEAN stack or go with Ember + a RESTful API.
Why should I choose Ember instead of Angular (which has a larger community / more tutorials / more answered questions)?
Angular does have a larger community, more tutorials and more answered questions, but when I started a side project with Angular to learn its possible advantages over Ember, I was surprised that there was no consensus in its community on how to do any one thing. So instead of quickly searching for how to declare and use directives (I think that was the thing that confused me), I had to do extra research into which way is best. Also, there are lots of ways to set up a project (where to put custom directives and the various Angular objects), and you have to do yet more research to choose one. I ended up using the healthy-gulp-angular repo as my template, but you can see it hasn't been updated for 8 months; Angular has changed a lot during those 8 months, and I'm not sure that repo is still the best choice.
In Ember you have the Ember CLI tool, which is built on the convention-over-configuration principle. You also have Ember Data, which uses the JSON API standard; if you don't have a JSON API-compliant server side right now, you can write a custom adapter to normalize server responses, or change how the backend replies. In Ember you don't have all that headache of different "best" solutions for one basic thing depending on who you ask.
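For example, a minimal sketch of what such a custom adapter might look like under Ember CLI conventions (the host comes from the question's example URL; everything else is an assumption, not working project code):

```javascript
// app/adapters/application.js
// Point Ember Data's REST adapter at the existing Java backend.
import DS from 'ember-data';

export default DS.RESTAdapter.extend({
  host: 'http://api.mywebsite.fr',
});
```

If the payload shape doesn't match what the store expects, a matching DS.RESTSerializer subclass can reshape it, for example by overriding normalizeResponse.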
What does "single-page application" mean?
A single-page application is basically a page which doesn't have to reload all assets and HTML when you navigate. Its advantage over a traditional PHP approach is that when the user moves to another location, they download only the new data for that route.
Do those frameworks allow me to create real routes? (www.myapp/profil/userid etc.)
Yes, of course. You don't even need a # in your URL. With a simple rewrite rule, a little logic for the profile route, and the path profile/:userid specified, when a user opens the URL www.myapp/profile/userid they will automatically be taken to the profile route, userid will be interpreted as a route parameter, and you can take this userid and find the user record from the store in the model hook.
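A minimal sketch of that route, assuming Ember CLI conventions of the time, with this.route('profile', { path: '/profile/:userid' }) declared in app/router.js and a 'user' model in the store:

```javascript
// app/routes/profile.js
import Ember from 'ember';

export default Ember.Route.extend({
  model(params) {
    // params.userid comes from the /profile/:userid dynamic segment
    return this.store.findRecord('user', params.userid);
  },
});
```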
Client = speed, server = stability.
JS frameworks update about once per week; non-JS back-ends about once per year.
Client-side behavior depends on the browser; the back-end depends only on the machine, not on the user's environment.
I chose FE because I got tired of debugging by writing variable values to the database just to see what was going on in the controllers -_-

Single Page Application - Frontend independent of backend?

I've done some research and noticed that in a lot of example Symfony2/AngularJS apps the frontend and backend are combined; for example, the views use Twig.
I'd always thought that it's possible (and common practice) to create the frontend and backend separately and just join them with an API. In that case, if I want to change the PHP framework, I can do it without any problems; it's enough to keep the API.
So what are the best practices for doing it? It would be great if you could explain it to me, and even better if you could just give me a link to a good example on GitHub or something.
We have been developing some projects using the same approach. Not only do I think it doesn't have any "side effects", the solution is very elegant too.
We usually create the backend in Node.js, and it is just an API server (not necessarily entirely REST-compliant). We then create another, separate web application for the frontend, written entirely in HTML5/JavaScript (with or without Angular.js). The API server never returns any HTML, just JSON! Not even an index structure.
There are lots of benefits:
The code is very clean and elegant. Communication between the frontend and the backend follows standardized methods. The server exposes some APIs, and the client can use them freely.
It makes it easier to have different teams for the frontend and the backend, and they can work quite freely without interfering with each other. Designers, who usually have limited coding skills, appreciate this too.
The frontend is just a static HTML5 app, so it can easily be hosted on a CDN (and we often did that). This means that your servers never have to worry about static content at all, their load is reduced, and you save money. Users are happier too, as CDNs are usually very fast for them.
Some hints that I can give you based on our experience:
The biggest issue is user authentication. It's not particularly complicated, but you may want to implement authentication using, for example, a protocol like OAuth 2.0 for your internal use. The frontend app will then act as an OAuth client and obtain an auth token from the backend. You may also want to consider moving the authentication server (with OAuth) to a separate resource from the API server.
If you host the webapp on a different hostname (e.g. a CDN) you may need to deal with CORS, and maybe JSONP (see the sketch after these hints).
The language you write the backend in is not really important. We have done it in PHP (including Laravel), though we got the best results with Node.js. For Node.js, we published our boilerplate on GitHub, based on RestifyJS.
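As an illustration of the CORS point above, here is a minimal sketch of a JSON-only API server that lets a frontend hosted on another hostname call it. Express is assumed; the allowed origin and the route are hypothetical.

```javascript
// Minimal JSON-only API server with CORS enabled (Express assumed).
const express = require('express');
const app = express();

// Allow the statically hosted frontend (e.g. on a CDN) to call this API.
app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'https://app.example-cdn.com'); // hypothetical frontend origin
  res.set('Access-Control-Allow-Headers', 'Authorization, Content-Type');
  res.set('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // answer preflight requests
  next();
});

// The server never returns HTML, just JSON.
app.get('/api/users/:id', (req, res) => {
  res.json({ user: { id: req.params.id, username: 'jdoe1234' } });
});

app.listen(3000);
```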
I asked some questions in the past you may be interested in:
Web service and API: "chicken or egg"
Security of an API server: login with password and sessions

Single-Page Play Application

I've just read about single-page web applications that expose a RESTful interface for retrieving the data - for example in JSON format - and that provide just a single HTML page referencing the Javascript file responsible for invoking the RESTful interface and building the web user interface dynamically in the client's browser.
To implement this in Play, one would implement the controllers so that they return JSON instead of HTML and write some CoffeeScript for rendering the user interface on the client side.
So far so good... but I'm wondering whether this design makes sense for large web applications, since the amount of JavaScript code to be run on the client side would keep growing.
My initial idea was to implement the web application using Play's template engine and then to provide a RESTful interface for Mobile apps.
Any suggestion, idea, or link to documentation that covers this topic would be really appreciated ;-)
The Play for Scala book has a chapter on this topic. They use a single view as an entry point, that's it.
As for large applications, that's a valid concern. For that you might want to use libraries such as RequireJS (which Play 2.1 has built-in support for), among others. You also might want to split your app into sub-modules to manage complexity. On the client side, you probably should use a framework, too, such as AngularJS.
Concerning Play there's not much left to say, it's a very good platform to expose RESTful JSON services. I recommend you take a look at the JSON documentation and also check out ReactiveMongo.
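On the browser side, not much is needed beyond fetching the JSON your Play controllers return and rendering it; a rough, framework-free sketch of that half (the /api/tasks endpoint, the field names and the #tasks element are assumptions) might look like this:

```javascript
// Fetch JSON from an assumed endpoint exposed by the Play controllers
// and render it into the single entry-point page.
async function renderTasks() {
  const res = await fetch('/api/tasks', { headers: { Accept: 'application/json' } });
  const tasks = await res.json();

  const list = document.getElementById('tasks'); // assumed <ul id="tasks"> in the entry page
  list.innerHTML = '';
  for (const task of tasks) {
    const li = document.createElement('li');
    li.textContent = task.name; // field name is an assumption
    list.appendChild(li);
  }
}

document.addEventListener('DOMContentLoaded', renderTasks);
```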
Providing a common REST API should work fine. At the moment I am working on a Play 2.0 server app serving browser (Backbone, etc.) and iOS clients. The browser client is totally separate from the Play app and deployed independently.
I think there is some initial overhead compared to the Play template approach, but having just one set of controllers to test, etc., makes life easier.
A couple of points to consider:
Client authentication. Preferably you would use the same mechanism for all the clients.
At some point you might want to introduce a specialized REST API for one of the clients in order to save bandwidth and reduce the number of requests. For example, a mobile landing screen is a typical candidate.
You need to document your REST APIs in more detail, as the web client devs are not sharing the codebase.

Performance considerations with Facebook C# SDK versus Javascript SDK

I'm starting a new Facebook canvas application so I can pick the technology I'm going to use. I've always been a fan of the .NET platform so I'm strongly considering it for this app. I think the work done in:
facebooksdk.codeplex.com
looks very promising. But my question is the following:
It's my understanding that when using an app framework like this (or PHP for that matter) with Facebook, whenever we have a call into the API to do some action (say post to the stream), the flow would be the following:
-User initiates a request, which is directed to the ASP.NET server
-ASP.NET server makes the Facebook API call
so a total of three machines are involved (the user's browser, my server, and Facebook's servers).
Why wouldn't one use the Javascript SDK instead?
http://developers.facebook.com/docs/reference/javascript/FB.api
"Server-side calls are available via the JavaScript SDK that allow you to build rich applications that can make API calls against the Facebook servers directly from the user's browser. This can improve performance in many scenarios, as compared to making all calls from your server. It can also help reduce, or eliminate the need to proxy the requests thru your own servers, freeing them to do other things."
So as I see it, I'd be taking my ASP.NET server out of the equation, reducing the number of machines involved from three to two. My server is under less load and the user (likely) gets faster performance.
Am I correct that using the Facebook C# SDK, we have this three machine scenario instead of the two machine scenario of the JS API?
Now I do understand that a web server framework like ASP.NET offers great benefits like great development tools, infrastructure for postbacks, etc, but do I have an incomplete picture here? Would it make sense to use the C# framework but still rely on the javascript sdk for most of the FB api calls? When should one use each?
Best,
-Ben
You should absolutely use the Javascript SDK when you can. You are going to get a lot better performance and your app will be more scalable. However, performance isn't the only consideration. Some things are just easier on the server. Also, a lot of apps do offline (or delayed) processing of user data that doesn't involve direct interaction.
I don't think there is a right or wrong place to use each SDK; they definitely both have their place in a well-built Facebook app. My advice would be to use whichever is easier for each task. As your app grows, you will learn where the bottlenecks are and where you really need to squeeze out that extra bit of performance, either by moving work to the client (Javascript SDK) or by moving work to be processed in the background (Facebook C# SDK).
Generally, we use the Javascript SDK for some authentication work and for most of the user-interface work. The one exception on the UI side is when we are really concerned about handling errors. It is a lot easier to handle errors on the server than with the Javascript SDK. The errors I am talking about are things like errors from Facebook or just general Facebook downtime.
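For example, a typical client-side call with the Javascript SDK looks roughly like this; it assumes the SDK is already loaded, the user is logged in, and the app has permission to publish, and its deliberately thin error handling shows why we prefer the server for that part:

```javascript
// Post to the user's feed directly from the browser with the Javascript SDK,
// so the request never touches the ASP.NET server.
FB.api('/me/feed', 'post', { message: 'Hello from the client!' }, function (response) {
  if (!response || response.error) {
    // Client-side error handling is limited; this is the case we generally
    // prefer to handle on the server instead.
    console.error('Facebook error', response && response.error);
  } else {
    console.log('Post id: ' + response.id);
  }
});
```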
Like I said, in the beginning just use both and do whatever is easier for each task.
