Node packages vs. browser packages - JavaScript

For example, a package like highlight.js works in Node just as it does in the browser. What is considered best practice/faster/ideal?
In this case, highlight.js beautifies a <code> tag with color schemes. For example, in a blog that uses it there are two options:
Fetch the post, show it to the user, and let the browser/client version beautify the code, or
Fetch the post, pass the contents to the highlight function in Node, and show the finished result to the user.
My concerns:
Reduce server load. Show the website earlier, since the server doesn't need to parse any data.
Avoid browser incompatibility (not a big deal, to be honest).
Save some static requests if not using a CDN. Maybe faster?
I don't know what else I'm missing or what should be considered. What do you think?
PS: Every day more packages become browser/Node compatible, but I think this is the best example I can provide.

The answer to that question can vary, but I would prefer to do it on the client side. Here are some pros and cons of the client-side route:
PRO: The one you mentioned: reduced server load. Remember, you're paying for your server and your client is paying for the connection (sometimes figuratively, as in wait time). If you process server-side, you pay more; if you process client-side, the client pays more. I would let the client pay!
CON: On the other hand, the syntax highlighting will load faster if you process server-side, because you can process once then cache for all subsequent clients.
CON: Browser incompatibility, like you said.
PRO: Semantics. You're augmenting highlighting on top of the raw data, rather than having the raw data strung up between <span>s. Think about non-JS machines trying to process your page.
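For illustration, a minimal sketch of both routes (the exact function names depend on the highlight.js version you ship; recent releases expose hljs.highlightAll() and hljs.highlight(code, { language }), while older ones use hljs.initHighlightingOnLoad() and hljs.highlight(lang, code)):

    // Client-side: run after the page, highlight.js, and a theme stylesheet have loaded
    hljs.highlightAll();   // walks the DOM and colors every <pre><code> block

    // Server-side (Node), e.g. while rendering the blog post:
    const hljs = require('highlight.js');
    const highlighted = hljs.highlight(postCode, { language: 'javascript' }).value;
    // embed `highlighted` inside <pre><code class="hljs">...</code></pre> in the generated HTML

Either way the resulting markup is the same; the question is only where the work happens and what gets cached.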

Related

HTML5 LocalStorage as cache and single asset request

I would like to know what the limits and cons are of the following concept:
Requirements:
Browser with LocalStorage support.
Server-side asynchronous, non-blocking I/O technology.
Let's imagine the following request flow:
client GET / request -> server. We call this stage the "greeting", which is an interesting stage because the client now sends (also through headers, of course):
ip
browser
browser version
language
charset
server -> client (200 OK)
client -> IF OK
-> establish a websocket with the server
once the websocket has been established we enter the "asset stream" stage.
server -> looks for matching assets (stylesheets, images, JavaScript files, fonts, etc.) that are specific to the client's language, browser, and resolution, and STREAMS them through the websocket.
server -> request (websocket, async stream of assets)
BENEFIT 1. No multiple requests over the wire, avoiding DNS lookups, etc.
BENEFIT 2. Cache the hell out of these assets in localStorage, which is the following stage.
request -> put in LocalStorage cache.
request -> render website.
I would like to get some opinions: what might be a good idea, what might not, etc.
My first thoughts were:
CDNs are not supported in this architecture.
We need one single request to get the JavaScript/HTML that starts the WebSocket, etc.
I hope my question was clear.
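To make the intended "asset stream" stage concrete, here is a minimal client-side sketch; the message shape { name, type, body } and the endpoint are purely assumptions on my part:

    // Hypothetical client for the "asset stream" stage
    var ws = new WebSocket('wss://example.com/assets');        // assumed endpoint
    ws.onmessage = function (event) {
      var asset = JSON.parse(event.data);                      // e.g. { name: 'app.css', type: 'css', body: '...' }
      localStorage.setItem('asset:' + asset.name, asset.body); // cache for later visits
      var el = document.createElement(asset.type === 'css' ? 'style' : 'script');
      el.textContent = asset.body;                             // inject the asset into the page
      document.head.appendChild(el);
    };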
Interesting approach; it's definitely worth thinking about. Let me be your devil's advocate:
BENEFIT 1. No multiple requests over the wire, avoiding DNS lookups, etc.
This is true, although it's only an issue when you're accessing a page/site for the first time. It's also somewhat mitigated by prefetching that modern browsers implement. It's important to remember that browsers will download multiple resources in parallel, which could be faster, and definitely more progressively responsive, than downloading the whole payload in bulk.
With today's technologies you can already serve full-fledged pages and applications with only a handful of resources as far as a web client is concerned (all of them can be gzipped!):
HTML
combined and minified CSS files as one resource
same for JS
image sprite
BENEFIT 2. Cache the hell out of these assets in localStorage...
Browsers already cache the hell out of such assets! In addition, there are proven and intelligent techniques to invalidate those caches (which is the second biggest challenge in software development).
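If you still wanted to layer localStorage on top of the browser cache, a minimal sketch of a versioned cache might look like this (the bundle URL and version string are hypothetical; bumping the version on deploy is the invalidation strategy):

    // Versioned localStorage cache -- a sketch, not a drop-in solution
    var ASSET_URL = '/bundle.js';   // hypothetical combined, minified JS bundle
    var VERSION   = 'v42';          // bump on every deploy to invalidate old copies
    var key = ASSET_URL + '@' + VERSION;
    var cached = localStorage.getItem(key);
    if (cached) {
      eval(cached);                 // or inject via a <script> element
    } else {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', ASSET_URL, true);
      xhr.onload = function () {
        localStorage.setItem(key, xhr.responseText);
        eval(xhr.responseText);
      };
      xhr.send();
    }

Even then, you would want to clear out keys for old versions so localStorage doesn't fill up.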
Other things to consider:
Don't underestimate CDNs. They are life savers when it comes to latency. Your approach is not latency-friendly during the first request.
AJAX and progressive enhancement approaches can optimize the web app experience to make it feel like a desktop app already.
You will need to re-invent or modify tools like Firebug to work with one stream containing all resources. Web development can hardly be imagined nowadays without those tools.
If browsers don't support this approach natively, then you will still have a hell of a time programming it and letting the browser know what your stream contains and how to handle it. By the time you process the stream and fire all the necessary events (in the optimal sequence!), you might not gain as much benefit as you hoped for.
Good luck!

Why do web applications send HTML over the wire?

This question pertains to web applications. I have very little web app development experience, so I might be missing some very obvious points/issues. Please point them out.
As I understand it, in most web applications a web server sends HTML over the wire to a client (browser). This happens every time an HTTP request is made. I feel this is very wasteful of bandwidth.
1) Since browsers can run JavaScript, why don't we just send a JavaScript program that can generate the webpage's HTML content (which the browser then renders)?
2) Further, a browser might cache the JavaScript program, and next time the server need only send the data. The protocol might involve the browser sending the "program version" it has.
Consider the example of a relatively simple website, Hacker News [http://news.ycombinator.com]. Let us separate the data (30 posts + their metadata) from its presentation. Assuming 1) above, the server can just send the data (say, in JSON) + a JavaScript program to generate HTML. This gist shows the idea. The data for the 30 posts is in JSON [http://www.json.org/js.html] format. For this particular example the data transferred is cut in half (size of data + JavaScript / size of HTML). Further, if browsers can do 2) above, it reduces the data transferred on each visit to a quarter (size of data / size of HTML). [Note: this analysis does not consider compression; gzip/deflate is very successful in reducing the size of HTML. But isn't prevention better than cure?]
I see at least the following advantages of this:
* For most web pages, it will reduce the size of data transferred over the wire.
* Forces web apps to separate data from its presentation.
Disadvantages might include - more complex browsers, time to run the JavaScript program to generate HTML (this might get offset by the reduction in data size).
Now my question is: why are web applications not developed this way, or, why do web applications send HTML over the wire? Surely the web server (sending out HTML) doesn't care about HTML at all, so why should it first generate it and then send it over the wire?
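To make 1) concrete, here is a hedged sketch of sending JSON and letting the client build the markup, using jQuery; the endpoint, field names, and container id are hypothetical:

    // Client-side rendering of a JSON payload -- a sketch of the idea in the question
    $.getJSON('/posts.json', function (posts) {   // hypothetical endpoint: [{ title, url, points }, ...]
      var $list = $('#posts');                    // assumed container element
      $.each(posts, function (i, post) {
        $('<li>')
          .append($('<a>').attr('href', post.url).text(post.title + ' (' + post.points + ' points)'))
          .appendTo($list);
      });
    });

The "program" here is just the rendering function; cache it (point 2) and only the JSON travels on later visits.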
There are a few reasons, some of them historical. This is by no means a complete list, just some of my experiences:
HTML predates JS, and a lot of scripts and libraries predate JS
Older browsers (think IE <= 6) had rubbish, inconsistent JS engines; their rendering engines were much more consistent in how they treated HTML. So many more libraries and scripts predate consistent JS.
It is a nightmare to debug applications written as you suggest if they are not constructed right (we have one at my work; it takes 30 minutes to find where a piece of HTML is actually generated).
It is a lot more work to do it right - why not use templates or static docs or something much simpler?
It's not really a problem - HTML compresses really well.
What you suggest is done - it's called AJAX (OK, AJAX is more general than this, but you know what I mean).
It simply doesn't work for most plain-text user agents, including those used by most search engines. If this page is serving most of your content, it's generally a good idea to make it easy for Google to parse.
Well, the obvious reason this is the case is that JavaScript wasn't around when we started sending HTML around, and HTML was an improvement over sending plain-text documents.
The reason we don't do this now: we eschew complex solutions to problems that aren't really problems.
Average internet connections download nearly 1 MB per second, and web browsers are quite adept at parsing and starting to render HTML before it has even finished downloading. They're also great at parallelizing the downloading of resources on the page. If we want to save a few bytes at the cost of some compute cycles, we gzip content before sending it. Problem solved.
And for the record, we do this with AJAX in complex web pages (check out GitHub's source browsing for a great example of how awesome this can be).
What you suggest can, and is, done. Remember, web pages used to be static documents. Full blown web-based applications are a relatively recent idea.
I might also suggest that it isn't necessarily more efficient, especially when your pages are sent gzipped.
What you suggest is basically what a JavaScript full stack framework like ExtJS does. You can create rich, data intensive applications without writing any HTML -- well, only enough to reference the necessary .js libraries. The complex DOM needed for layouts, grids, forms etc is all created by the framework.
The simple answer is that HTML is older. Why is C99 not fully implemented in a lot of compilers? They figure 1989 is new enough for them. Also, JavaScript exercises a lot more control over people's browsers than they seem to want. Conditional statements and encoded data pose a security concern, and some people want to keep that can of worms closed to begin with. True, HTML is a very inefficient markup, but the size is insignificant compared to the images you download from the internet. That favicon takes up as much data as the page itself, and it's only 16 pixels across.
A good reason that the server-side code of a web application might do lots of HTML template work on the server side is that in many server environments it's not made easy to bundle up server-side data structures (object graphs) for easy delivery to the client. There may be information kept in server-side data structures that really shouldn't be delivered out to the client. Thus in order to send out a "pure" data-only response, the server would have to trim off sensitive data before delivering out the JSON. That's not an unsolvable problem, but I don't know of many server frameworks that facilitate a solution.
The server has direct, unfettered access to the database and to everything else that makes an application work: user preferences, history, account details, system settings, etc. To build an application that's client-centric for rendering purposes would mean concocting ways of keeping all that information intact and up-to-date on the client. For a lot of applications, that might not be terribly easy.
Finally, it's only relatively recently that it would make sense to trust a browser to provide a stable enough platform for building a long-lived "application environment" as a continually-updating web page. By building a web app such that pages are sometimes completely reloaded, there are lots of little "reboots". That's a cheap and dumb way of keeping a lid on at least some kinds of memory leaks.
Most implementations of sites with heavy JavaScript use won't start executing until the DOM has fully loaded, so you get pages with 'loading screens': the page wrapper has downloaded, but none of the content has.
Also, do remember that not all users have JavaScript enabled, and not all browsers support high-level JavaScript (think mobiles).
I would send HTML in a response if I wanted my application to work without JavaScript. I would write HTML rendering code in my server-side language (most of the time not JavaScript), which could then be used for two purposes: serving whole HTML pages, and serving bits of HTML in response to XHRs.
If the JavaScript code is restricted to things like reporting UI events and replacing innerHTML with server-generated code, I don't have to duplicate any of my application logic across languages/frameworks. This duplication problem is one of the reasons why server-side JavaScript is getting people excited.

Find what has been changed and upload only changes

I'm just looking for ideas/suggestions here; I'm not asking for a full on solution (although if you have one, I'd be happy to look at it)
I'm trying to find a way to only upload changes to text. It's most likely going to be used as a cloud-based application running on jQuery and HTML, with a PHP server running the back-end.
For example, if I have text like
asdfghjklasdfghjkl
And I change it to
asdfghjklXasdfghjkl
I don't want to have to upload the whole thing (the text can get pretty big)
For example, something like 8,X sent to the server could signify:
add an X to the 8th position
Or D8,3 could signify:
go to position 8 and delete the previous 3 terms
However, if a single request is corrupted en route to the server, the whole document could be corrupted since the positions would be changed. A simple hash could detect corruption, but then how would one go about recovering from the corruption? The client will have all of the data, but the data is possibly very large, and it is unlikely to be possible to upload.
So thanks for reading through this. Here is a short summary of what needs suggestions
Change/Modification Detection
Method to communicate the changes
Recovery from corruption
Anything else that needs improvement
There is already an accepted format for transmitting this kind of "differences" information. It's called Unified Diff.
The google-diff-match-patch library provides implementations in Java, JavaScript, C++, C#, Lua, and Python.
You should be able to just keep the "original text" and the "modified text" in variables on the client, then generate the diff in javascript (via diff-match-patch), send it to the server, along with a hash, and re-construct it (either using diff-match-patch or the unix "patch" program) on the server.
You might also want to consider including a "version" (or a modified date) when you send the original text to the client in the first place. Then include the same version (or date) in the "diff request" that the client sends up to the server. Verify the version on the server prior to applying the diff, so as to be sure that the server's copy of the text has not diverged from the client's copy while the modification was being made. (of course, in order for this to work, you'll need to update the version number on the server every time the master copy is updated).
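A minimal sketch of that client-side flow with the JavaScript diff-match-patch build (patch_make and patch_toText are the library's real functions; the endpoint and version field are hypothetical):

    // Client: build a patch from the original vs. edited text and send it with a version tag
    var dmp = new diff_match_patch();
    var patches = dmp.patch_make(originalText, modifiedText);
    var patchText = dmp.patch_toText(patches);   // textual patch, similar to a unified diff
    $.post('/save-diff', {                       // hypothetical endpoint
      version: documentVersion,                  // the version the client started editing from
      patch: patchText
    }, function (response) {
      if (response.ok) { documentVersion = response.newVersion; }
    });

On the server you would verify the version, apply the patch (patch_apply in diff-match-patch, or the unix "patch" program), and bump the version.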
You have a really interesting approach. But if the text files are really so large that it would take too much time to upload them every time, why do you have to send the whole thing to the client? Does the client really have to receive the whole 5 MB text file? Wouldn't it be possible to send only what is needed?
Anyway, to your question:
The first thing that comes to my mind when hearing "large text files" and modification detection is diff. For the algorithm, read here. This could be an approach to commit the changes, and it specifies a format for it. You'd just have to rebuild diff (or a part of it) in JavaScript. This will not be easy, but it should be possible, I guess. If the algorithm doesn't help you, possibly at least the definition of the diff file format does.
To the corruption issue: you don't have to fear that your data gets corrupted on the way, because the TCP protocol, on which HTTP is based, ensures that everything arrives without being corrupted. What you should fear is a connection reset. Maybe you can do something like a handshake? When the client sends an update to the server, the server applies the modifications and keeps one old version of the file. To ensure that the client has received the confirmation from the server that the modification went fine (that's where the connection reset matters), the client sends back another AJAX request to the server. If this one doesn't reach the server within some defined time, the file gets reset on the server side.
Another thing: I don't know how well JavaScript handles such gigantic files/amounts of data...
This sounds like a problem that versioning systems (CVS, SVN, Git, Bazaar) already solve very well.
They're all reasonably easy to set up on a server, and you can communicate with them through PHP.
After the setup, you'd get for free: versioning, log, rollback, handling of concurrent changes, proper diff syntax, tagging, branches...
You wouldn't get the 'send just the updates' functionality that you asked for. I'm not sure how important that is to you. Pure texts are really very cheap to send as far as bandwidth is concerned.
Personally, I would probably make a compromise similar to what Wikis do. Break down the whole text into smaller semantically coherent chunks (chapters, or even paragraphs), determine on the client side just which chunks have been edited (without going down to the character level), and send those.
The server could then answer with a diff, generated by your versioning system, which is something they do very efficiently. If you want to allow concurrent changes, you might run into cases where editors have to do manual merges, anyway.
Another general hint might be to look at what Google did with Wave. I have to remain general here, because I haven't really studied it in detail myself, but I seem to remember that there have been a few articles about how they've solved the real-time concurrent editing problem, which seems to be exactly what you'd like to do.
In summary, I believe the problem you're planning to tackle is far from trivial, there are tools that address many of the associated problems already, and I personally would compromise and reformulate the approach in favor of much less workload.

Security and JavaScript files containing a site's logic

Now that JavaScript libraries like jQuery are more popular than ever, .js files are starting to contain more and more of a site's logic: how and where it pulls data/information from, how that info is processed, etc. This isn't necessarily a bad thing, but I'm wondering to what extent this might be a security concern.
Of course the real processing of data still happens in the backend using PHP or some other language, and it is key that you make sure nothing unwanted happens at that point. But just by looking at the .js of a site (one that relies heavily on e.g. jQuery), it'll tell a person maybe more than you, as a developer, would like. Especially since every browser nowadays comes with a fairly extensive web developer environment or add-on. Even for a novice, manipulating the DOM isn't that big of a deal anymore. And once you figure out what code there is, and how you might be able to influence it by editing the DOM, the 'fun' starts.
So my main concerns are:
I don't want everyone to be able to look at a .js file and see exactly (or rather: for a large part) how my site, web app or CMS works — what is there, what it does, how it does it, etc.
I'm worried that by 'unveiling' this information, people who are a lot smarter than I am figure out a way to manipulate the DOM in order to influence JavaScript functions they now know the site uses, possibly bypassing backend checks that I implemented (and thus wrongly assuming they were good enough).
I already use different .js files for different parts of e.g. a web app. But there's always stuff that has to be globally available, and sometimes this contains more than I'd like to be public. And since it's all "out there", who's to say they can't find those other files anyway.
I sometimes see a huge chunk of JavaScript without line breaks and all that. Like the compact jQuery files. I'm sure there are applications or tricks to convert your normal .js file to one long string. But if it can do that, isn't it just as easy to turn it back to something more readable (making it pointless except for saving space)?
Lastly I was thinking about whether it was possible to detect if a request for a .js file comes from the site itself (by including the script in the HTML), instead of a direct download. Maybe by blocking the latter using e.g. Apache's ModRewrite, it's possible to use a .js file in the HTML, but when someone tries to access it, it's blocked.
What are your thoughts about this? Am I overreacting? Should I split my JS as much as possible or just spend more time triple checking (backend) scripts and including more checks to prevent harm-doing? Or are there some best-practices to limit the exposure of JavaScripts and all the info they contain?
Nothing in your JavaScript should be a security risk, if you've set things up right. Attempting to access an AJAX endpoint one finds in a JavaScript file should check the user's permissions and fail if they don't have the right ones.
Having someone view your JavaScript is only a security risk if you're doing something broken like having calls to something like /ajax/secret_endpoint_that_requires_no_authentication.php, in which case your issue isn't insecure JavaScript, it's insecure code.
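As a hedged illustration of that first point, here is what the permission check might look like in a Node/Express backend; the route, session fields, and helper functions are all assumptions, and the same check applies in PHP or any other backend:

    // Every AJAX endpoint re-checks permissions server-side; never trust the client
    app.post('/ajax/delete-post', function (req, res) {              // hypothetical endpoint
      if (!req.session || !req.session.userId) {                     // requires session middleware
        return res.status(401).json({ error: 'not logged in' });
      }
      if (!userCanDeletePost(req.session.userId, req.body.postId)) { // hypothetical check
        return res.status(403).json({ error: 'forbidden' });
      }
      deletePost(req.body.postId);                                   // hypothetical data-layer call
      res.json({ ok: true });
    });

Knowing the URL from your .js file gains an attacker nothing if the endpoint itself enforces the rules.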
I sometimes see a huge chunk of JavaScript without line breaks and all that. Like the compact jQuery files. I'm sure there are applications or tricks to convert your normal .js file to one long string. But if it can do that, isn't it just as easy to turn it back to something more readable (making it pointless except for saving space)?
This is generally minification (to reduce bandwidth usage), not obfuscation. It is easily reversible. There are obfuscation techniques that'll make all variable and function names something useless like "aa", "bb", etc., but they're reversible with enough effort.
Lastly I was thinking about whether it was possible to detect if a request for a .js file comes from the site itself (by including the script in the HTML), instead of a direct download. Maybe by blocking the latter using e.g. Apache's ModRewrite, it's possible to use a .js file in the HTML, but when someone tries to access it, it's blocked.
It's possible to do this, but it's easily worked around by any half-competent attacker. Bottom line: nothing you send a non-privileged user's browser should ever be sensitive data.
Of course you should spend more time checking back-end scripts. You have to approach the security problem as if the attacker is one of the key developers on your site, somebody who knows exactly how everything works. Every single URL in your site that does something to your database has to be protected to make sure that every parameter is within allowed constraints: a user can only change their own data, can only make changes within legal ranges, can only change things in a state that allows changes, etc etc etc. None of that has anything at all to do with what your Javascript looks like or whether or not anyone can read it, and jQuery has nothing at all to do with the problem (unless you've done it all wrong).
Remember: an HTTP request to your site can come from anywhere and be initiated by any piece of software in the universe. You have no control over that, and nothing you do to place restrictions on what clients can load what pages will have any effect on that. Don't bother with "REFERER" checks because the values can be faked. Don't rely on data scrubbing routines in your Javascript because those can be bypassed.
Well, you're right to be thinking about this stuff. It's a non-trivial and much misunderstood area of web application development.
In my opinion, the answer is that yes, it can create more security issues, simply because (as you point out) the vectors for attack are increased. Fundamentally, not much changes from a traditional (non-JS) web application, and the same best practices and approaches will serve you very well. E.g., watching out for SQL injection, buffer overflows, response splitting, etc. You just have more places you need to watch out for it.
In terms of the scripts themselves, the issues around cross-domain security are probably the most prevalent. Research and learn how to avoid XSS attacks in particular, and also CSRF attacks.
JavaScript obfuscation is not typically carried out for security reasons, and you're right that it can be fairly easily reverse engineered. People do it, partially to protect intellectual property, but mainly to make the code download weight smaller.
I'd recommend Christopher Wells' book, published by O'Reilly, called 'Securing Ajax Applications'.
There is free software that does JavaScript obfuscation, but there is no security through obscurity. This does not prevent all attacks against your system. It does make it more difficult, but not impossible, for other people to rip off your JavaScript and use it.
There is also the issue of client-side trust. By having a lot of logic on the client side, the client is given the power to choose what it wants to execute. For instance, if you are escaping quote marks in JavaScript to protect against SQL injection, a hacker is going to write exploit code that builds his own HTTP request, bypassing the escaping routines altogether.
TamperData and Firebug are commonly used by hackers to gain a deeper understanding of a web application.
JavaScript code alone CAN have vulnerabilities in it. A good example is DOM-based XSS, although I admit this is not a very common type of XSS.
Here's a book by Billy Hoffman about Ajax security:
http://www.amazon.com/Ajax-Security-Billy-Hoffman/dp/0321491939/ref=sr_1_1?ie=UTF8&s=books&qid=1266538410&sr=1-1

Server-side processing vs. client-side processing + AJAX?

Looking for some general advice and/or thoughts...
I'm creating what I think is more of a web application than a web page, because I intend it to be like the Gmail app, where you would leave the page open all day long while getting updates "pushed" to the page (for the interested, I'm using the Comet programming technique). I've never created a web page before that was so rich in AJAX and JavaScript (I am now a huge fan of jQuery). Because of this, time and time again when I'm implementing a new feature that requires a dynamic change in the UI that the server needs to know about, I am faced with the same question:
1) Should I do all the processing on the client in JavaScript and post back as little as possible via AJAX,
or
2) Should I post a request to the server via AJAX, have the server do all the processing, and then send back the new HTML? Then, on the AJAX response, I do a simple assignment with the new HTML.
I have been inclined to always follow #1. This web app, I imagine, may get pretty chatty with all the AJAX requests. My thought is to minimize as much as possible the size of the requests and responses, and rely on the continuously improving JavaScript engines to do as much of the processing and UI updates as possible. I've discovered that with jQuery I can do so much on the client side that I wouldn't have been able to do very easily before. My JavaScript code is actually much bigger and more complex than my server-side code. There are also simple calculations I need to perform, and I've pushed those to the client side, too.
I guess the main question I have is: should we ALWAYS strive for client-side processing over server-side processing whenever possible? I've always felt the less the server has to handle, the better for scalability/performance. Let the power of the client's processor do all the hard work (if possible).
Thoughts?
There are several considerations when deciding if new HTML fragments created by an ajax request should be constructed on the server or client side. Some things to consider:
Performance. The work your server has to do is what you should be concerned with. By doing more of the processing on the client side, you reduce the amount of work the server does, and speed things up. If the server can send a small bit of JSON instead of giant HTML fragment, for example, it'd be much more efficient to let the client do it. In situations where it's a small amount of data being sent either way, the difference is probably negligible.
Readability. The disadvantage to generating markup in your JavaScript is that it's much harder to read and maintain the code. Embedding HTML in quoted strings is nasty to look at in a text editor with syntax coloring set to JavaScript and makes for more difficult editing.
Separation of data, presentation, and behavior. Along the lines of readability, having HTML fragments in your JavaScript doesn't make much sense for code organization. HTML templates should handle the markup and JavaScript should be left alone to handle the behavior of your application. The contents of an HTML fragment being inserted into a page is not relevant to your JavaScript code, just the fact that it's being inserted, where, and when.
I tend to lean more toward returning HTML fragments from the server when dealing with ajax responses, for the readability and code organization reasons I mention above. Of course, it all depends on how your application works, how processing intensive the ajax responses are, and how much traffic the app is getting. If the server is having to do significant work in generating these responses and is causing a bottleneck, then it may be more important to push the work to the client and forego other considerations.
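To illustrate the trade-off between those two options, here is a short hedged sketch; the endpoints, field names, and selectors are hypothetical:

    // Option A: server returns an HTML fragment; the client just inserts it
    $.get('/messages/latest.html', function (html) {          // hypothetical endpoint
      $('#inbox').html(html);
    });

    // Option B: server returns JSON; the client builds the markup itself
    $.getJSON('/messages/latest.json', function (messages) {  // hypothetical endpoint
      var items = $.map(messages, function (m) {
        return '<li>' + m.subject + ' - ' + m.from + '</li>';  // escape these fields in real code
      });
      $('#inbox').html('<ul>' + items.join('') + '</ul>');
    });

Option A keeps the markup in your server templates; option B keeps the payload small at the cost of markup living in JavaScript.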
I'm currently working on a pretty computationally-heavy application right now and I'm rendering almost all of it on the client-side. I don't know exactly what your application is going to be doing (more details would be great), but I'd say your application could probably do the same. Just make sure all of your security- and database-related code lies on the server-side, because not doing so will open security holes in your application. Here are some general guidelines that I follow:
Don't ever rely on the user having a super-fast browser or computer. Some people are using Internet Explorer 7 on old machines, and if it's too slow for them, you're going to lose a lot of potential customers. Test on as many different browsers and machines as possible.
Any time you have some code that could potentially slow down or freeze the browser momentarily, show a feedback mechanism (in most cases a simple "Loading" message will do) to tell the user that something is indeed going on, and the browser didn't just randomly freeze.
Try to load as much as you can during initialization and cache everything. In my application, I'm doing something similar to Gmail: show a loading bar, load up everything that the application will ever need, and then give the user a smooth experience from there on out. Yes, they're going to have to potentially wait a couple seconds for it to load, but after that there should be no problems.
Minimize DOM manipulation. Raw number-crunching JavaScript performance might be "fast enough", but access to the DOM is still slow. Avoid creating and destroying elements; instead simply hide them if you don't need them at the moment.
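As a tiny hedged sketch of the second and fourth points above, using jQuery (the selectors and endpoint are hypothetical):

    // Show a "Loading" indicator while a potentially slow AJAX call runs
    $('#loading').show();
    $.getJSON('/report/data', function (data) {   // hypothetical endpoint
      renderReport(data);                         // hypothetical rendering function
      $('#loading').hide();
    });

    // Minimize DOM churn: hide and re-show a panel instead of destroying and recreating it
    $('#details').hide();   // later: $('#details').show();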
I recently ran into the same problem and decided to go with browser-side processing. Everything worked great in FF, IE8, and IE8 in IE7 mode, but then... our client, using Internet Explorer 7, ran into problems: the application would freeze up and a script-timeout box would appear. I had put too much work into the solution to throw it away, so I ended up spending an hour or so optimizing the script and adding setTimeout wherever possible (see the sketch after the suggestions below).
My suggestions?
If possible, keep non-critical calculations client side.
To keep data transfers low, use JSON and let the client side sort out the HTML.
Test your script using the lowest common denominator.
If needed, use the profiling feature in Firebug. Corollary: use the uncompressed (development) version of jQuery.
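For reference, a minimal sketch of the setTimeout trick mentioned above: breaking a long-running loop into chunks so the browser stays responsive (the chunk size and processItem callback are placeholders):

    // Process a large array in chunks so old browsers don't show a script-timeout warning
    function processInChunks(items, chunkSize, processItem, done) {
      var i = 0;
      function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
          processItem(items[i]);
        }
        if (i < items.length) {
          setTimeout(nextChunk, 0);   // yield to the browser before the next chunk
        } else if (done) {
          done();
        }
      }
      nextChunk();
    }

    // usage: processInChunks(rows, 200, renderRow, function () { $('#loading').hide(); });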
I agree with you. Push as much as possible to users, but not too much. If your app slows down, or even worse crashes, their browser, you lose.
My advice is to actually test how your application behaves when left open all day. Check that there are no memory leaks. Check that an AJAX request isn't being created every half second after working with the application for a while (timers in JS can be a pain sometimes).
Apart from that, never perform user input validation only in JavaScript. Always duplicate it on the server.
Edit
Use jQuery live binding (see the sketch below). It will save you a lot of time when rebinding generated content and will make your architecture clearer. Sadly, when I was developing with jQuery it wasn't available yet; we used other tools with the same effect.
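A hedged sketch of what that looks like; .live() was the original API, and jQuery 1.7+ expresses the same idea with a delegated .on() (the selectors are hypothetical):

    // Older jQuery: the handler also applies to matching elements added later
    $('.delete-button').live('click', function () {
      $(this).closest('li').remove();
    });

    // jQuery 1.7+: delegated form of the same binding
    $('#list').on('click', '.delete-button', function () {
      $(this).closest('li').remove();
    });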
In the past I also had a problem where generating one part of a page via AJAX depended on generating another part. Generating the first part first and the second part second will make your page slower, as expected. Plan this up front. Develop pages so that they already have all their content when opened.
Also (this applies to simple pages too), keep the number of files referenced from one server low. Join JavaScript and CSS libraries into one file on the server side. Keep images on a separate host, or better, on separate hosts (creating just a third-level domain will do too). This is worth it only in production, though; it will make the development process more difficult.
Of course it depends on the data, but a majority of the time, if you can push it client-side, do. Make the client do more of the processing and use less bandwidth. (Again, this depends on the data; you can get into cases where you have to send more data across to do it client-side.)
Some stuff like security checks should always be done on the server. If you have a computation that takes a lot of data and produces less data, also put it on the server.
Incidentally, did you know you could run Javascript on the server side, rendering templates and hitting databases? Check out the CommonJS ecosystem.
There could also be cross-browser support issues. If you're using a cross-browser, client-side library (e.g. jQuery) and it can handle all the processing you need, then you can let the library take care of it. Generating cross-browser HTML server-side can be harder (it tends to be more manual), depending on the complexity of the markup.
This is possible, but only with a heavy initial page load and heavy use of caching. Take Gmail as an example:
On the initial page load, it downloads most of the JS files it needs to run, and most of them are cached.
Don't overuse images and graphics.
Load all the data needed for the initial view, along with subsequently predictable user data. In Gmail and the latest Yahoo Mail, the inbox is not populated with only a single mail conversation body; the first few full email messages are loaded in advance at page-load time. The secret of high responsiveness comes at a cost (Gmail asks you to load the light version if the bandwidth is low; I bet most of us have experienced this).
Follow the KISS principle: keep your design simple.
And never try to render the whole page using JavaScript in any case; you cannot assume all your end users have high-spec systems or high-bandwidth connections.
It's smart to split the workload between your server and client.
If you think that in the future you might want to create an API for your application (communicating with iPhone or Android apps, letting other sites integrate with yours), you would have to duplicate a bunch of code for all those devices if you go with a bare-bones server implementation of your application.
