How does scalability match up in these two architectures? (For a web app)
i.e. I can see that for MVVM JavaScript, we have:
Advantages:
-A share of the required logical processing is executed in the client's browser. So for n requests,
the clients' browsers take this load off the server n times (e.g. iterating through collections to output HTML).
Disadvantages:
-More requests per user. For the initial dynamic HTML, the traditional approach needs 1 request per user, but in MVVM we may need up to 5, i.e.
Request 1: the user gets the initial HTML.
Requests 2-5: the Knockout client requests JSON data so it can set up lists, dynamic HTML, etc.
No doubt these JSON requests can be asynchronous actions, but even so, how badly will this affect load?
When you build a SPA-style app, the advantage is that once the initial page is loaded, subsequent requests are smaller than usual: you are only requesting data as opposed to HTML + data.
In terms of the effects on server load, it'll depend on your app. If the bottleneck is in data processing (fetching from db, domain logic, ...) either approach will have more or less the same load, since you have to process the data anyway.
If on the other hand the bottleneck is in the rendering, the client-side approach will be more beneficial since the rendering will be done on the client.
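To make that concrete, here is a minimal sketch of the two styles of endpoint, assuming a Node/Express server (the routes and data are invented for illustration): the traditional route renders HTML + data on every request, while the SPA-style route returns only JSON.

```
// Hypothetical Express app contrasting the two styles.
const express = require('express');
const app = express();

const items = [{ id: 1, name: 'foo' }, { id: 2, name: 'bar' }];

// Traditional: the server renders HTML + data on every request.
app.get('/items', (req, res) => {
  const rows = items.map(i => '<li>' + i.name + '</li>').join('');
  res.send('<html><body><ul>' + rows + '</ul></body></html>');
});

// SPA-style: after the initial page load, the client only asks for data.
app.get('/api/items', (req, res) => {
  res.json(items);
});

app.listen(3000);
```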
Related
Sometimes when I create basic web tools, I will start with a nodeJS backend, typically creating an API server with ExpressJS. When certain routes are hit, the server responds by rendering the HTML from EJS using the live state of the connection and then sends it over to the browser.
This app will typically expose a directory for the public static resources and will serve those as well. I imagine this creates a lot of overhead for this form of web app, but I'm not sure.
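The first setup looks roughly like this minimal sketch (the view name and state object are placeholders, not my actual code):

```
// Sketch of the server-rendered setup: Express + EJS + static files.
const express = require('express');
const app = express();

app.set('view engine', 'ejs');
app.use(express.static('public')); // serve the public static resources too

app.get('/dashboard', (req, res) => {
  // Render HTML on the server from the live state, then send it over.
  const state = { user: 'alice', items: [1, 2, 3] }; // placeholder state
  res.render('dashboard', state);
});

app.listen(3000);
```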
Other times I will start with an API (which could be the exact same nodeJS structure, with no HTML rendering, just state management and API exposure) and I will build an Angular2 or other HTML web page that connects to the API, fetches data on load, and populates the page.
These pages tend to rely on a lot of AJAX calls and jQuery in order to refresh Angular components after a bunch of async callbacks get triggered. In this structure, I'll use a web server like Apache to serve all the files and define the routes, and the JS in the web pages will do the rest.
What are the overall strengths and weaknesses of both? And why should I use one strategy versus the other? Are they both viable and dependent upon scale and use? I imagine horizontal scaling with load balancers could work in both situations.
There is no universally good or bad approach. Each of the approaches you described above has some advantages, and you need to decide which one best suits your project.
Some points that you might consider:
Server-side processing
Security - You don't have to expose sensitive information (API tokens, logins, etc.).
More control - You will have more control over what you do with your resources.
"Better" client support - Some clients (IE) do not support same things as the others. Rendering HTML on the server rather than manipulating it on client will give you more support for clients.
It can be simpler to pre-render your resources on the server rather than dealing with an asynchronous approach on the client.
SEO, social sharing etc. - Bots see your pages exactly as your server sends them. If you pre-render everything on the server, a bot will be able to scrape your site, tag it, etc. If you do it on the client, it will just see a non-processed page. That being said, there are ways to work around that.
Client-side processing
Waiting times - Doing work on the client side can make the app feel faster after the initial load. But be careful not to do too much, since JS is single-threaded and heavy work will block your UI.
CDN - You can serve static resources (HTML, CSS, JS, etc.) from a CDN, which will be much faster than serving them from your server app directly.
Testing - It is easy to mock backend server when testing your UI.
The client is a front-end for a particular application/device. The more logic you put into the client, the more code you will have to replicate across different clients. Therefore, if you plan to have a mobile app, it is better to expose a collection of APIs to call rather than including your logic in the client.
Security - Whatever runs on the client can be fully read by the client. No matter how much you minify, compress, or obfuscate, a resourceful person will always be able to do whatever they want with your code.
I did not mark pro/con on each point on purpose because it is up to you to decide which it is.
This list could go on and on; I didn't want to add more points because it is very subjective, and in the end it depends on the developer and the application.
I personally tend to choose the "client making AJAX requests" approach, or a blend of both: pre-render something on the server and let the client take care of the rest. Be careful with the latter, though, as it can break your automated tests, IDE integration, etc. if not implemented correctly.
Last note - You should always do crucial validations on the server. Never rely on data from the client.
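As a minimal illustration of that last point, here is a hedged sketch assuming an Express backend (the route and field names are invented): the server re-validates the input regardless of what the client-side form already checked.

```
// Hypothetical Express route: never trust client-supplied data.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/orders', (req, res) => {
  const quantity = req.body.quantity;
  // Re-validate on the server even if the client form already checked this.
  if (!Number.isInteger(quantity) || quantity < 1 || quantity > 100) {
    return res.status(400).json({ error: 'quantity must be an integer between 1 and 100' });
  }
  res.status(201).json({ ok: true });
});
```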
I've inherited a site that uses Knockout JS and ASP.NET. The site runs decently after everything has loaded, but the initial load leaves a lot to be desired. Digging through the code, there are around 20 models, and each one calls an AJAX method to get data from the server on page load. Quite a bit of data is being queried from the DB, which is causing the performance issue: the server sends the JS, then the client sends and receives a large amount of data over 20 calls.
I want to handle all of the queries on the server side before I send the page to the client, and then load the JS models from that data. I am thinking about embedding this data in the page as JSON in a hidden div and loading the models from there instead of via an AJAX call.
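A minimal sketch of that idea - a `<script type="application/json">` tag is a common alternative to a hidden div (the ids and model shape here are made up):

```
<!-- The server embeds the pre-queried data in the page response. -->
<script id="bootstrap-data" type="application/json">
  {"customers": [{"id": 1, "name": "Alice"}], "orders": []}
</script>
<script>
  // The client reads the inlined JSON instead of making an AJAX call.
  var data = JSON.parse(document.getElementById('bootstrap-data').textContent);
  var viewModel = { customers: ko.observableArray(data.customers) };
  ko.applyBindings(viewModel);
</script>
```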
My question is, is this best practice? Is there a better way to optimize this scenario?
If you inline the data from the 20 queries in the page response, then the page response time can be significantly prolonged. It will result in the browser having to sit and wait on the previous page, or on a boring blank page.
However, if you keep the solution as-is, the user will initially get the page much faster, and the data will pop in as it becomes ready.
Although the total load time is probably going to be better with the data inlined, the perceived performance from the user's perspective is going to be worse. Here is a nice post on the subject: http://www.lukew.com/ff/entry.asp?1797
Another benefit is that you avoid a weakest-link problem where the page response time is dictated by the slowest query. That would be quite severe under query timeout conditions.
Also be aware of error handling: if one query fails, you must still inline the successful queries and also handle the failed one.
I would argue that it is much better to do the queries from the browser.
There are some techniques to consider if you want to have the 20 queries executed more efficiently. Consider using something like SignalR to send all queries in a single connection and having the results also stream back in a single connection. I've used this technique previously with great success, it also enabled me to stream back cached results (from server-side cache) before the up-to-date results from a slow backend service was returned.
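Even without SignalR, a single batched endpoint cuts the 20 round trips down to one; a rough sketch (the endpoint and the per-model `load` method are hypothetical):

```
// Client: one request for all models instead of 20 separate calls.
$.getJSON('/api/bootstrap', function (data) {
  // data = { customers: [...], orders: [...], ... }
  Object.keys(data).forEach(function (name) {
    viewModels[name].load(data[name]); // hypothetical per-model loader
  });
});
```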
For my web apps I'm always wondering which way is best to design a proper web application with data persistence. For now, I design a single HTML page every time, and all the content and data upload is managed with jQuery AJAX requests, based on a RESTful model, to a remote server which takes care of the database. But in the end that sometimes produces a lot of AJAX calls, and fetching a huge amount of data sometimes takes a few seconds, which is not user-friendly.
Is there something like a guideline, or a standard way of developing, for designing web apps?
I've already looked over the WebWorkers and WebSockets JavaScript APIs, but have never used them. Has anybody tried them? Do they allow better performance than AJAX exchanges?
What is your way of developing web apps?
This isn't really the place for questions like this, but I will give you a few brief pointers.
AJAX requests shouldn't take long; if they are consistently slow then the problem is most likely your server-side code and inefficiencies there. WebSockets aren't going to give you any benefit over AJAX if your server is slow.
A common design is to load the minimal dataset required for the page to function, AJAXing any other required data to get the page responsive as quickly as possible.
Caching and pre-fetching are great ways to speed up your site. For instance, if you are running the same MySQL query over and over, run it once, put the results in a caching service like memcached (or MongoDB) with an expiration of an hour or so, and serve the cached response; this will speed up your server response times. Pre-fetching is anticipating what your user is going to do next and loading that data in the background without any user interaction.
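A minimal cache-aside sketch in Node, using a plain in-memory Map for illustration (a real deployment would swap in memcached or similar; `runSlowQuery` is a placeholder for the expensive call):

```
// Cache-aside with a TTL, sketched with an in-memory Map.
const cache = new Map();
const TTL_MS = 60 * 60 * 1000; // one hour

async function getCached(key, runSlowQuery) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    return hit.value; // serve the cached response
  }
  const value = await runSlowQuery(); // e.g. the repeated MySQL query
  cache.set(key, { value: value, storedAt: Date.now() });
  return value;
}
```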
Consider using localStorage or IndexedDB if your users are loading the same data repeatedly.
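On the client, a sketch of the same idea with localStorage (the cache key and max age are arbitrary choices):

```
// Client-side cache of an AJAX response in localStorage.
function fetchWithLocalCache(url, maxAgeMs, onData) {
  var raw = localStorage.getItem(url);
  if (raw) {
    var entry = JSON.parse(raw);
    if (Date.now() - entry.storedAt < maxAgeMs) {
      return onData(entry.data); // fresh enough: skip the network
    }
  }
  $.getJSON(url, function (data) {
    localStorage.setItem(url, JSON.stringify({ data: data, storedAt: Date.now() }));
    onData(data);
  });
}
```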
Suppose you were to build a highly functional single-page client-side application that listens to URL changes in order to navigate around the application.
Suppose then that when a user (or search engine bot) loads a page by its URL, instead of delivering the static JavaScript file and hitting the API as normal, we'd like to precompute everything server-side and deliver the DOM along with the JS state.
I am wondering if there are existing tools or techniques for persisting such an execution of state to the client.
I know that I could execute the script in something like PhantomJS and output the DOM elements, but then event handlers, controllers, and the JS memory state would not be attached properly. I could sniff the user agent and only send the precomputed content to bots, but I am afraid Google would punish us for this, and we would also lose the speed benefit of sending everything precomputed in the first place.
So you want to compile, server-side, and send to the client the results of requesting a resource at a specific URL? What is your backend written in?
We have an API running on GAE in Java. Our app is a single-page app, and we use the HTML5 history object so we have to have "real responses" for actual URLs on the front-end.
To handle this we use JSP to pre-cache the data in the page as it's loaded from the server and sent to the client.
On the front end we use Backbone, so we modified Backbone.sync to look for a copy of the data it wants locally on the page, and only if it's not there to request it from the server with an AJAX call.
So, yes, this is pretty much what every site did before AJAX existed. The trick is writing your app so that the data can live in the page (or even in localStorage), and only if it is missing to request it from the server. Then make sure your page is "built" on the server end (we actually populate the data into the HTML elements on the server, so the page doesn't require JS on the client).
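A minimal sketch of that Backbone.sync override, assuming the server drops the pre-cached data into a global `window.bootstrappedData` object keyed by URL (that name is invented here):

```
// Check the page-local bootstrap data before falling back to AJAX.
var originalSync = Backbone.sync;
Backbone.sync = function (method, model, options) {
  if (method === 'read') {
    var url = options.url || _.result(model, 'url');
    var cached = window.bootstrappedData && window.bootstrappedData[url];
    if (cached) {
      delete window.bootstrappedData[url]; // use the bootstrap copy only once
      options.success(cached);
      return;
    }
  }
  return originalSync.apply(this, arguments);
};
```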
If you navigate somewhere else, the data is loaded dynamically and the page doesn't reload.
Where is it best to handle pagination? Server side, or dynamically using JavaScript?
I'm working on a project which is heavy on AJAX and pulls in data dynamically, so I've been working on a JavaScript pagination system that uses the DOM - but I'm starting to think it would be better to handle it all server side.
What are everyone's thoughts?
The right answer depends on your priorities and the size of the data set to be paginated.
Server side pagination is best for:
Large data set
Faster initial page load
Accessibility for those not running JavaScript
Client side pagination is best for:
Small data set
Faster subsequent page loads
So if you're paginating for primarily cosmetic reasons, it makes more sense to handle it client side. And if you're paginating to reduce initial load time, server side is the obvious choice.
Of course, client side's advantage on subsequent page load times diminishes if you utilize Ajax to load subsequent pages.
Doing it on the client side will make your user download all the data up front, which might not be needed, and removes the primary benefit of pagination.
The best way to do this in such AJAX apps is to make an AJAX call to the server for the next page and update the current page using client-side script.
If you have large pages and a large number of pages, you are better off requesting pages in chunks from the server via AJAX. So let the server do the pagination, based on your request URL.
You can also pre-fetch the next few pages the user will likely view to make the interface seem more responsive.
If there are only a few pages, grabbing it all up-front and paginating on the client may be the better choice.
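A hedged sketch of the server-driven version, as an Express route with invented table and parameter names (`db` is assumed to be an already-configured MySQL connection; LIMIT/OFFSET is used for simplicity):

```
// Server-side pagination: the URL carries the page, the server slices the data.
app.get('/api/items', (req, res) => {
  const page = Math.max(1, parseInt(req.query.page, 10) || 1);
  const pageSize = 20; // cap the page size on the server
  db.query(
    'SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?',
    [pageSize, (page - 1) * pageSize],
    (err, rows) => {
      if (err) return res.status(500).json({ error: 'query failed' });
      res.json({ page: page, items: rows });
    }
  );
});
```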
Even with small data sizes, the best choice would be server-side pagination. You will not have to worry later if your web application scales further.
And for larger data sizes the answer is obvious.
Server side - send to the client just enough content for the current view.
In a practical world of limits, I would page on the server side to conserve all the resources associated with sending the data. Also, the server needs to protect itself from a malicious/malfunctioning client asking for a HUGE page.
Once that code is happily chugging along, I would add "smarts" to the client to get the "next" and "previous" page and hold that in memory. When the user pages to the next page, update your cache.
If the client software does this sort of page caching, consider how quickly your data ages (how likely it is to change) and whether you should check that your cached page of data is still valid. Maybe re-request it if it's more than 2 minutes old. Maybe keep a "dirty" flag in it. Something like that. Hope you find this helpful. :)
Do you mean that your JavaScript has all the data in memory, and shows one page at a time? Or that it downloads each page from the server as it's needed, using AJAX?
If it's the latter, you also may need to think about sorting. If you sort using JavaScript, you'll only be able to sort one page at a time, which doesn't make much sense. So your sorting should be done on the server.
I prefer server-side pagination. However, when implementing it, you need to make sure you're optimizing your SQL properly. For instance, in MySQL a large OFFSET with LIMIT still forces the server to read and discard all the skipped rows, so you may need to rewrite your SQL (e.g. paginate on an indexed key) to use the index properly.
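A hedged sketch of that rewrite - keyset pagination on an indexed id column versus the naive OFFSET form (table and variable names are invented; `db` and `cb` are assumed):

```
// OFFSET pagination: MySQL still reads and discards the skipped rows.
db.query('SELECT id, name FROM items ORDER BY id LIMIT 20 OFFSET 100000', cb);

// Keyset pagination: seek from the last id seen on the previous page,
// which lets the index do the work.
db.query(
  'SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT 20',
  [lastSeenId],
  cb
);
```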
One other thing to point out here is that you will very rarely be limited to simply paging through a raw dataset.
You might have to search for certain terms in one or more of the columns you are displaying, then sort on a few columns, and then give the user the ability to page through this filtered dataset.
In a situation like this you have to decide whether it is better to do this search and/or sort logic client side or server side.
Another thing to consider is that Amazon's CloudSearch API gives you some very powerful searching abilities, and obviously you'll want to let CloudSearch handle searching and sorting for you if you happen to have your data hosted there.