Difference between WebApp that connects to API vs backend rendering - javascript

Sometimes when I create basic web tools, I start with a Node.js backend, typically an API server built with Express. When certain routes are hit, the server renders HTML from EJS templates using the live state of the connection and sends it to the browser.
This app will typically also expose a directory of public static resources and serve those. I imagine this creates a lot of overhead for this kind of web app, but I'm not sure.
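A minimal sketch of that first setup, with illustrative route and template names (Express and EJS assumed installed):

// Server-rendered approach: Express renders an EJS template per request
// and also serves static files. Route, template, and state are illustrative.
const express = require('express');
const app = express();

app.set('view engine', 'ejs');
app.use(express.static('public')); // the public static directory

app.get('/dashboard', (req, res) => {
  // "live state" is whatever the server knows at request time
  const state = { user: 'alice', itemCount: 3 };
  res.render('dashboard', state); // views/dashboard.ejs -> full HTML page
});

app.listen(3000);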
Other times I start with an API (which could be the exact same Node.js structure, with no HTML rendering, just state management and API exposure) and build an Angular 2 or other HTML web page that connects to the API, loads data on page load, and populates the page with it.
These pages tend to rely on a lot of AJAX calls and jQuery to refresh Angular components after a batch of async callbacks fires. In this structure, I'll use a web server like Apache to serve all the files and define the routes, and the JavaScript in the web pages does the rest.
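A matching sketch of the second setup, using plain fetch in place of the framework (the endpoint and element ids are invented for illustration):

// Client-rendered approach: a static page asks the API for its data on load.
async function loadDashboard() {
  const res = await fetch('https://api.example.com/dashboard');
  const state = await res.json();
  document.getElementById('user').textContent = state.user;
  document.getElementById('count').textContent = state.itemCount;
}

document.addEventListener('DOMContentLoaded', loadDashboard);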
What are the overall strengths and weaknesses of both? And why should I use one strategy versus the other? Are they both viable and dependent upon scale and use? I imagine horizontal scaling with load balancers could work in both situations.

There is no inherently good or bad approach here. Each of the approaches you described has advantages, and you need to decide which one suits your project best.
Some points that you might consider:
Server-side processing
Security - You don't have to expose sensitive information (API tokens, logins, etc.) to the client.
More control - You will have more control over what happens with your resources.
"Better" client support - Some clients (e.g. IE) do not support the same features as others. Rendering HTML on the server rather than manipulating it on the client gives you broader client support.
It can be simpler to pre-render your resources on the server than to deal with an asynchronous approach on the client.
SEO, social sharing, etc. - Bots see your pages exactly as your server sends them. If you pre-render everything on the server, a bot will be able to scrape your site, tag it, etc. If you render on the client, it will just see an unprocessed page. That being said, there are ways to work around that.
Client-side processing
Waiting times - Doing work on the client side can improve load times after the initial load. But be careful not to do too much there, since JS is single-threaded and heavy work will block your UI.
CDN - You can serve static resources (HTML, CSS, JS, etc.) from a CDN, which will be much faster than serving them from your server app directly.
Testing - It is easy to mock the backend server when testing your UI.
The client is a front-end for a particular application/device. The more logic you put into the client, the more code you will have to replicate across different clients. So if you plan to have a mobile app, it is better to have a collection of APIs to call rather than embedding your logic in the client.
Security - Whatever runs on the client can be fully read by the client. No matter how much you minify, compress, or obfuscate it, a resourceful person will always be able to do whatever they want with your code.
I deliberately did not mark each point as a pro or a con, because it is up to you to decide which it is.
This list could go on and on; I didn't add more points because it is very subjective, and in the end it depends on the developer and the application.
I personally tend to choose the "client making AJAX requests" approach, or a blend of both - pre-render some things on the server and let the client take care of the rest. Be careful with the latter, though, as it can break your automated tests, IDE integration, etc. if not implemented correctly.
Last note - You should always do crucial validations on the server. Never trust data from the client.
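A hedged illustration of that last point, assuming an Express endpoint (route and field names are made up): the client may validate for convenience, but the server must validate again before touching any data.

// Server-side validation sketch: never trust what the client sends.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/orders', (req, res) => {
  const { quantity } = req.body;
  // Re-check on the server even if the form already validated this value.
  if (!Number.isInteger(quantity) || quantity < 1 || quantity > 100) {
    return res.status(400).json({ error: 'quantity must be an integer from 1 to 100' });
  }
  res.status(201).json({ ok: true });
});

app.listen(3000);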

Related

Issue with Adblockers on server-side GTM

I am using server-side GTM, but I am facing adblocking issues with the request below when I want to retrieve the gtm.js file:
https://example.gtmdomain.com/gtm.js?id=GTM-MY_GTM_ID
The request works fine when I don't use adblockers.
Is there a way to rename the endpoint to something else, such as https://example.gtmdomain.com/secret_file_name.js?id=GTM-MY_GTM_ID in order to not be blocked by adblockers?
So. Server-side GTM is exactly what it says: it's executed on the server. It listens for network requests. It doesn't have any exposure to what happens on the front-end, and the front-end has no clue about there being a server-side GTM. Well, unless there are explicit calls to its endpoint, which you can proxy with your backend mirrors when needed.
What you're experiencing is adblockers blocking your front-end GTM container. Even though it's theoretically possible to track all you need, including front-end events, with server-side GTM alone, it's considered best practice to use both GTMs and stream front-end events to the back-end GTM through the front-end GTM.
This, of course, leaves you at the mercy of adblockers, since they will block your front-end GTM. A way to avoid that is... well, not to use the front-end GTM at all, and have all your tracking implemented either in a tag manager that is not blocked (I doubt there is one) or in your own custom JavaScript library that does all the front-end tracking and sends it to the backend GTM to be properly processed and distributed.
Generally, it's too expensive to implement tracking with no TMS, since then you really have to know your JS, so only the cool kids can afford to do this. A good example would be Amazon.
Basically, it would cost about two to five times more (depending on particulars) to implement tracking with no TMS, while adblockers typically cut only about 10% of traffic. 10% is not vital for reporting or for measuring the effectiveness of funnels and whatnot. All the critically important data goes unreported by analytics anyway; the backend is the real source of critical data.
You can easily do this if you use sGTM hosting from https://stape.io
There is a feature called Custom Loader. With it, you can download Web GTM from a different path, and all other related scripts (for example, gtag.js for GA4) will also be downloaded from different URLs.
More info: https://stape.io/blog/avoiding-google-tag-manager-blocking-by-adblockers
You can also create your own custom loader client for Web GTM. However, there will be problems with related scripts: UA/GA4 will still be blocked, but GTM itself will not.
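As a rough illustration of the proxying idea (not Stape's implementation; the path, port, and domain are made up), a server of your own can serve the GTM loader from an unsuspicious path. Note that the related scripts it loads in turn can still be blocked, as described above:

// Hedged sketch: proxy the GTM loader through a custom path so simple
// URL-pattern adblock rules don't match. Assumes Node 18+ (global fetch).
const express = require('express');
const app = express();

app.get('/secret_file_name.js', async (req, res) => {
  const id = encodeURIComponent(req.query.id || '');
  const upstream = `https://example.gtmdomain.com/gtm.js?id=${id}`;
  const r = await fetch(upstream);
  res.type('application/javascript').send(await r.text());
});

app.listen(3000);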
So, I finally implemented a great solution using GTM client templates. It works like a charm.
To summarize, the steps are:
Create a client template from your server container. You can import this template from https://raw.githubusercontent.com/gtm-templates-simo-ahava/gtm-loader/main/template.tpl
Create a new client from this client template
Name your path as you want
This article explains perfectly the required steps: https://www.simoahava.com/analytics/custom-gtm-loader-server-side-tagging/

why is SSR faster than SPA and vice versa?

I've been reading a lot about SPA vs SSR, and maybe I understand the real idea or maybe not. I'd really appreciate it if someone experienced could tell me whether my assumptions are right.
Observation 1)
SPA - the client requests www.example.com; this goes from the browser to the server. The server returns index.html, which has just <div id="app"></div> and the script source for the JavaScript file. The browser then makes another request for that bundled JS file, and the server returns it. The returned JS file then kicks in and starts executing; when it's done, the page is shown to the user.
SSR - the client requests www.example.com from the browser to the server. The server does all the work, making any API calls or other things, puts everything into HTML, and returns the HTML. If this HTML references some styles or other JS sources, the browser will make requests for those too.
What I think - Why is SSR faster? Is it because in the SPA case the browser had to download the whole JS bundle for the whole website, while in the SSR case only the content of the specific page the user is entering gets returned?
Observation 2)
SPA - once the page has loaded and the user clicks one of the other routes, it won't make any request to the server to get the HTML to show to the user. All the routes' JS is already downloaded, so there is no need to make a request to the server unless there's an AJAX call for some dynamic data from the database.
SSR - this would again make a request to the server to get the HTML file for the new page.
What I think - the SPA is faster in this case, even though it will still need to make an AJAX request for some data. An AJAX request for some data seems faster than a request to download the newly rendered HTML, which would also need that same call on the server.
I know SSR is good for SEO, but I am only interested in performance. What do you think? Is everything I said correct?
SPA
Load once, navigate freely. Sounds great, in theory at least. The initial load time can be a burden. If you have 6 different screens in your application, each taking 0.5s to load, would you rather wait 0.5s every time you reach a new one, or 3s up front? For most users any long loading time is unacceptable, so it's often better to load things incrementally.
On top of that, with most modern JS frameworks (Angular, Vue.js, React, etc.) you have the burden of actually running the framework to create pages inside the client browser. That can create performance issues in some cases.
SSR
Generate everything server-side, serve content dynamically. Sounds better for code splitting and letting the user load only what they need. On top of that, you can run your framework on a powerful server rather than on the client's computer/phone/fridge.
However, as you stated, you need to request more content more often. To avoid that, most modern frameworks can create partial routes, dynamically loading page fragments inside a fixed layout when only part of the page needs to update on navigation.
But that's not all, introducing
Service worker
Service workers and SSR work very well together. If you establish a cache-first strategy, then every time your user goes from page A to B to A again, you'll load the application fragments for A and B only once. Your service worker will recognize that you are requesting the same fragment again and pull it from the user's cache rather than making a new network request.
It makes things feel lightning fast.
On top of that you can also preload content. A user is hovering a link? Start loading the app fragments used by that route. It might only save a few hundred ms, but when you are loading small app fragments it can feel instantaneous to your user.
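A minimal sketch of that cache-first strategy (the cache name is illustrative):

// Cache-first service worker: repeated requests for app fragments are
// answered from the cache; the network is only hit on a miss.
const CACHE = 'app-fragments-v1';

self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // only cache idempotent requests
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached; // cache hit: no network request at all
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});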
What's the catch then? Well, first of all, it can actually be worse if you have a static application. Caching exists for SPAs too, and if your app is small enough you might not care about the few ms saved during the initial load. Even with a larger application, it mostly depends on your user base.
But above all, SSR requires an actual server to run the app. A lot of hosting services, like Firebase or AWS, let you host static files (as you would for a PWA) and handle a database easily from the client side, for a really cheap, if not free, cost. SSR requires a server, so you will have an increased running cost compared to those services.
In regards to your second observation, an SPA doesn't have to download everything up front. Through dynamic import, an SPA can download assets on demand when a user switches routes, as the sketch below shows. This is an optimization that avoids the overhead of downloading everything up front.
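A small sketch of that route-level code splitting (file, element, and function names are invented):

// The settings bundle is only fetched the first time the user navigates
// there; until then it costs nothing.
async function showSettings() {
  const { renderSettings } = await import('./pages/settings.js');
  renderSettings(document.getElementById('app'));
}

document.querySelector('#nav-settings').addEventListener('click', showSettings);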
As far as SSR vs SPA goes, SSR generally gives a better user experience. With SSR the server can send ready-to-render HTML, so the application is mostly dispatching requests and rendering the results with minimal client-side logic. If an SPA is built with a framework, most downloaded assets have to be processed before anything renders. However, an SPA can be designed so that a minimal amount of code is provided on the initial load and all remaining assets are downloaded in the background.
You can do an awful lot with an SPA, but having the ability to offload processing to a dedicated server is almost always a win. However, if you are worried about user bandwidth, or if you are developing a mobile application, then an SPA might be worth considering. An SSR app can still do well here, but if you can get the assets onto the user's device up front, you should have fewer latency-related issues.
Again, everything comes down to design decisions, but in my experience an SSR application is more performant and scalable.
SSR provides the end user with a perceived performance improvement. If you use SSR and an SPA together, SSR gives the end user something to look at quickly while the SPA boots and the page becomes interactive.

Completely splitting the front and backend in web application

The backend (server) only provides data through APIs; all page rendering is done by the front end through JavaScript (AJAX).
Generally I think this is a good idea for supporting multiple clients; for example, Android, iOS, and browser clients can all use the same API. People can also have clearer responsibilities: one can focus on the server or the front end only.
However there are some problems I can find at the moment:
1 Reusable layout
For pages rendered by the backend, we can use template (layout) libraries like Apache Tiles to give the whole application a uniform look and feel.
For client-side rendering, neither EJS nor Handlebars seems to fit this.
2 Authentication
Some operations may be open only to authenticated users; normally we use:
<c:if test="${sessionScope.user != null}">
    <!-- security-sensitive operations go here -->
</c:if>
With client-side rendering, we have to send an AJAX request to the server to make sure the user is authenticated and then refresh the view, which makes the rendering steps more complicated.
3 URL parameters parsing
When the pages are rendered by the server, we can get the URL parameters easily, for example with @RequestParam or @PathVariable, which means we can create any kind of URL.
With client-side rendering, we would have to build a common library to parse the parameters ourselves (see the sketch after this question).
4 ...
I think there maybe more problems I have not mentioned yet.
Then I wonder: is this kind of design desirable? If yes, are there any common practices we can follow?
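An editorial aside on point 3: modern browsers ship URL-parsing APIs, so a custom library is often unnecessary. A minimal sketch:

// Client-side equivalent of @RequestParam: read ?page=3&sort=name
// straight from the current URL with the built-in URLSearchParams API.
const params = new URLSearchParams(window.location.search);
const page = Number(params.get('page') || 1); // -> 3
const sort = params.get('sort');              // -> "name"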
Most modern frameworks (ZK, Meteor, Django) go this way. Instead of putting everything into one monolith, all data services are implemented as REST or HTTP services which return JSON. The UI just knows when to make which call, and it is implemented in JavaScript.
A uniform look and feel is not necessarily something to aim for; most platforms are slightly different, and users expect those differences (Apple behaves differently than Android or Windows).
As for authentication, you still need to log in to the services (most APIs expect a cookie or they won't work), so that doesn't change.
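For illustration (the endpoints are hypothetical): the client logs in once, the browser carries the session cookie on every API call, and the server decides what the user may see.

// Cookie-based auth from a fully client-rendered app; endpoints are made up.
async function loginAndLoadOrders() {
  await fetch('/api/login', {
    method: 'POST',
    credentials: 'include', // let the browser store and resend the session cookie
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user: 'alice', password: 'secret' }),
  });

  const res = await fetch('/api/orders', { credentials: 'include' });
  if (res.status === 401) {
    // not authenticated: show the login view instead of the protected data
    return null;
  }
  return res.json();
}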

How to combine JMVC (javascript-mvc) and server side MVC together

I asked this question a few days ago here and no one answered it.
I also asked it on forum.javascriptMVC.com, and now I have an answer; however, I need a bit more insight.
Question:
I read JavaScriptMVC's documentation and I loved it.
But I don't know how to use it in a large-scale project.
I think an MVC framework is needed on the server side too, or would at least help a lot, and I have worked with server-side PHP frameworks.
I am confused. My understanding of JavaScriptMVC projects is that they capture and handle client-side events in the browser, execute AJAX requests, manage the responses/data from the server, and show them to the user in a graphical interface.
I know that in PHP MVC projects we also have controllers (and actions), each of which is a separate page with a single entry point; my point is that each of these pages is a whole HTTP request.
I think the combination of these two frameworks would take the form of a single file, or a few heavy files (including JS, CSS, images, etc.), loaded and managed by another JavaScript library such as steal.js.
The user can then work with the site, and its actions (as events) result in running JS functions that may change something in the UI or fire an AJAX request, as in Yahoo Mail, where most things happen on one page.
So how will this affect the design of controllers and actions in PHP? Normally, in PHP MVC frameworks, a lot of controllers and actions means a lot of pages. I think that because of AJAX the number of controllers and actions should actually be smaller, and because of JMVC most controllers (and actions) should turn into AJAX responders. But how are layouts and views to be handled in this context?
Finally
I want to know about the different aspects of using this method (JMVC + server-side MVC). (I am using Yii as my server-side MVC framework and JavaScriptMVC as my client-side MVC.)
I also want to know about managing data on the client side.
I would like to understand how AJAX and WebSockets could be used: where can we use AJAX, and where can we use WebSockets?
I want to understand local storage: how can we use it for simulated page data management and maybe caching? How can we cache data coming from the server as JSON in the form of a page? I am working on a very large project and I want to build its foundation very strong.
Say that you have a JMVC framework where:
The model gets data from the server using AJAX requests - expecting JSON results.
The view does not rely on the server for more than providing the raw HTML.
The controllers do not rely on the server for more than serving the JS files.
Essentially, you use the server for what it "should" be used for - data storage and processing - while you let the client browser handle all the tedious stuff.
Now, let's see how to define a server-side framework. As I see it we have several options, all of them fairly similar, albeit somewhat different (all returning something in JSON format):
A fully fledged MVC such as CakePHP
Custom implementation
WebService implementation
Personally I would use a WebService, and I already have - or rather, a WebSocket-based JSON-RPC WebService.
Using a fully fledged MVC will have its drawbacks in maintainability and, not insignificantly, server load. But it is very possible; just implement a view which formats the page as JSON...
Making a JMVC client does not, in my view, mean that it cannot request new HTML from the server. But it does mean that the requested HTML should be free of data, other than the metadata the JavaScript view needs to know where to put the data received from, for example, the WebService.
So a main page in the JMVC could just contain a single
<div id="content"></div>
and menu clicks can fetch a subpage from the server and load it into content. That loaded content can then contain some more JavaScript which starts WebService requests to get data from the server, to display in the empty placeholders it in turn contains.
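A hedged sketch of exactly that flow (the URLs and element ids are invented for illustration):

// A menu click pulls an HTML fragment into #content, then the fragment's
// data is fetched from a JSON service and placed into its placeholders.
document.querySelector('#menu-reports').addEventListener('click', async () => {
  const html = await (await fetch('/fragments/reports.html')).text();
  document.getElementById('content').innerHTML = html;

  const data = await (await fetch('/api/reports')).json();
  document.getElementById('report-total').textContent = data.total;
});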
First check https://stackoverflow.com/a/4458566/718224; after that you can move forward.
Try this (from https://stackoverflow.com/a/8424768/718224)
No, you don't need to use MVC server-side, but it will help with the organization and separation of application and business logic. Depending on the scale of your application, that could help tremendously in the future.
The key is just making sure you organize your backend code well; otherwise you will end up with a monolithic and/or difficult-to-maintain codebase.
Server-side views would contain your HTML and any JavaScript that may or may not make requests to the server. This assumes that you are actually using PHP to build the pages a user navigates to.
If you have a static HTML page that builds itself using AJAX requests, then you may not need server-side views at all. Your controllers would more than likely be outputting JSON data. If this is the case, it doesn't make models and controllers any less useful.
Try this (from https://stackoverflow.com/a/8424760/718224)
If you are using any of the major PHP frameworks (CakePHP, CodeIgniter, Symfony, etc.) then you ARE using MVC already. If your server-side logic is more complex than a few really simple scripts, then you probably should be using one of those frameworks, applying MVC on both the server and the client.
Many (most?) larger web apps being built today are moving towards using an MVC framework for both client-side and server-side application code. It's a fantastic pattern for separating concerns in many large applications, especially request/response server apps and event-driven browser apps.
Try this (from https://stackoverflow.com/a/8427063/718224)
Backbone.js connects your application over a RESTful JSON interface. I honestly find that it works wonderfully in conjunction with a server-side MVC framework. If you build a RESTful API, you can let your server manage CRUD updates quite easily. All your server-side code is responsible for is saving and sending back JSON objects to Backbone.js. Then let most of your logic and magic happen within the Backbone.js framework.
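A small sketch of what that wiring looks like (the resource and attribute names are invented; Backbone is assumed to be loaded): Backbone maps fetch/save/destroy onto REST verbs against the model's URL.

// Backbone model bound to a REST resource; the server just speaks JSON.
const Book = Backbone.Model.extend({ urlRoot: '/api/books' });

const book = new Book({ id: 42 });
book.fetch();                      // GET    /api/books/42
book.save({ title: 'New title' }); // PUT    /api/books/42 (POST when no id)
book.destroy();                    // DELETE /api/books/42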
Try this (from https://stackoverflow.com/a/8078282/718224)
First, a client-side MVC framework like Backbone isn't just for single-page apps. You can also use it to add rich interaction to one or many views of a more traditional app. These frameworks simply provide structure and data abstractions on the client.
Next, these client-side frameworks are designed specifically to work with your back-end MVC framework. Backbone.js (since you tagged it specifically) models and collections work with REST services. They talk via the GET/POST/PUT/DELETE verbs and ultimately communicate with your controllers on the back-end when they make asynchronous requests.
In the case of Backbone, it talks JSON instead of HTML. In the case of Rails, this is really easily handled in the controller: if the request is an HTML one, you return a view as HTML; if it is a JSON request (*.json or Content-Type), the controller returns a JSON representation of the data. I assume it is as easy in Django as it is in Rails to have the same controller respond to multiple content types (HTML, XML, JSON, etc.).
May this help you:
Client-side web apps and rich client-side web pages often use JMVC, Backbone, and similar libraries. With that kind of JS library, plus HTML5 technologies like Web Storage, you can build more application-like websites where everything - templating, state management, and so on - happens on the client side, and AJAX requests/responses to the server are used only to get/set data or update status. Regarding the first section, they are right: a JMVC site is more like a single-page website, e.g. Hotmail, Yahoo, etc.

How to create temporary files on the client machine, from Web Application?

I am creating a web application using JSP, Struts, EJB, and Servlets. The application is a combined CRM and accounting package, so the database is very large. In order to make execution faster, I want to prevent round trips to the database.
For that purpose, what I want to do is create some temporary XML files on the client machine and use them whenever required. How can I do this, given that JavaScript does not permit it? Is there any way of doing this? Or is there any other solution I could adopt to make my application faster?
You do not have unfettered access to the client file system to create a temporary file on the client. The browser sandbox prevents this for very good reasons.
What you can do, perhaps, is make some creative use of caching in the browser. jQuery's data method is an example of this. TIBCO General Interface makes extensive use of a browser cache for XML data. Their code is open source and you could take a look to see how they've implemented their browser cache.
If the database is large and you are attempting to store large files, the browser is likely not going to be a great place for that data. If, however, the information you want to store is fairly small, using an in-browser cache may accomplish what you'd like.
You should be caching on the web server.
As you've no doubt realised by now, there is a very limited set of things you can do on the client machine from a web app (e.g., write a cookie).
You can make your application use the browser plugin Google Gears, which gives you real client-side storage.
Apart from that, remember there is a large overhead for every single request: if needed you can easily pack a few hundred kB into one response, but far-away users might only be able to execute a few requests per second. Try to keep the number of requests down, even if it means adding overhead in the form of more data.
@justkt Actually, there is no good reason not to allow a web application to store data. Indeed, the HTML5 specifications include storage similar to that offered by Google Gears; browser support is just a bit too sporadic to rely on that feature.
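For a modern reader, a hedged sketch of that idea using the Web Storage API that eventually shipped (the endpoint and cache key are made up); the same pattern suits any small, cacheable payload:

// Cache an API response in localStorage and skip the round trip next time.
async function getCustomers() {
  const cached = localStorage.getItem('customers');
  if (cached) return JSON.parse(cached); // served from the client, no request

  const data = await (await fetch('/api/customers')).json();
  localStorage.setItem('customers', JSON.stringify(data));
  return data;
}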
If you absolutely want to cache something on the client, you can create the file on your server and have your web app retrieve it. That way the browser will fetch it and keep it in the client cache.
But keep in mind that this could be a pain for the client if the file is large.
