I have written pure JavaScript front ends before and started noticing performance degrade when working with large stores of data. I have tried using XML and JSON, but in both cases it was a lot for the browser to handle.
That poses my question, which is how much is too much?
You can't know, not exactly and not always. You can make a good guess.
It depends on the browser, OS, RAM, CPU, what else is running at that moment, how fast their connection is, what else they're transferring, etc.
Figure out several situations you expect for your average user, and test those. Add various best, worst, and interesting (e.g. mobile, tablet) cases.
You can, of course, apply experience and extrapolate from your specific cases, and the answer will change for the future.
But don't fall into the trap of "it works for me!"
I commonly see this with screen resolutions: as those have increased, it's much more popular to have multiple windows visible at the same time. In 1995 it was rare for me to not have something maximized; now fifteen years later, it's exactly the opposite.
Yet sometimes people will design some software or a website, use lower contrast[1], maximize it, and connect to a server on localhost—and that's the only evaluation they do.
[1] Because they know what the text says and don't need to read it themselves, so lower contrast looks aesthetically better.
In my opinion, if you need to stop and think about this issue, then the data is too much. In general you should design your applications so that users with low-end netbooks and/or slow internet connections can still run them. Also keep in mind that more often than not your application isn't the only page your users have open at the same time.
My recommendation is to use Firefox with Firebug to do some measurements. See how long a request takes to complete in a modest configuration. If it takes noticeable time for the browser to render the data, then you'd be better off doing a redesign.
A good guiding principle should be that instead of worrying about whether the browser can handle the volume of data you're sending it, worry about whether your user can handle it. It all depends on the presentation of course (i.e., a lot of data bound for a visualization tool that'll render a complex graph in a canvas is different than a lot of raw numbers bound for a gigantic table), but in my experience a user's brain reaches data overload before the browser/network/client computer.
It really depends on the form that your external data is going to take in your Javascript. If you want to load all your data at once and keep it in memory as a large object with lots of properties (associative array), then you will find that most current desktops can only handle about 100k entries (with small key-value pairs) before performance really degrades.
If it is possible, you should see if there are ways to only load the data that is needed by the user for a given request / interaction. You can use AJAX to request needed data and prefetch data that you think the user may need.
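One way to sketch the "load only what's needed, prefetch what's likely" idea above: page through the data instead of loading it all. The endpoint path and query parameters here are hypothetical, just to show the shape of the approach.

```javascript
// Build a URL for one page of data; endpoint and parameter names are made up.
function pageUrl(base, page, pageSize) {
  return base + "?offset=" + page * pageSize + "&limit=" + pageSize;
}

function fetchPage(page, pageSize) {
  return fetch(pageUrl("/api/items", page, pageSize))
    .then(function (res) { return res.json(); });
}

// Render the current page, then warm the cache with the next one in the
// background so the next navigation feels instant.
function loadAndPrefetch(page, pageSize, render) {
  return fetchPage(page, pageSize).then(function (data) {
    render(data);
    fetchPage(page + 1, pageSize); // result is used on the next navigation
  });
}
```

This keeps the in-memory object small regardless of how large the full dataset grows.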
Related
I'm working on a BIG project, written in RoR with a jQuery frontend. I'm adding AngularJS, which has intelligent dependency injection, but what I want to know is: how much JavaScript can I put on a page before it becomes noticeably slow? What are the specific limits of each browser?
Assuming that my code is well factored and all operations run in constant time, how many functions, objects, and other things can I allocate in JavaScript before the browser hits its limit? There must be one, because any computer has a finite amount of RAM and disk space (although disk space would be an ambitious limit to hit with JavaScript).
I've looked online, but I've only seen questions about how many assets can be loaded, i.e. how many megabytes. I want to know whether there is an actual computational limit imposed by browsers and how it differs between them.
-- EDIT --
For the highly critical, I guess a better question is
How does a modern web browser determine the limit for the resources it allocates to a page? How much memory is a web page allowed to use? How much disk space can a page use?
Obviously I use AJAX, and I know a decent amount about render optimization. It's not a question of how I can make my page faster, but rather: what are my resource limitations?
Although technically it sounds like a monumental task to reach the limits of a client machine, it's actually very easy to hit them with an accidental loop. Everyone has done it at least once!
It's easy enough to test: write a JS loop that allocates huge amounts of memory and you'll find your PC's memory usage will peg out, and it will indeed consume your virtual memory too, before the browser falls over.
I'd say, from experience, even if you don't get anywhere near the technological limits you're talking about, the limits of patience of your visitors/users will run out before the resources.
Maybe it's worth looking at AJAX solutions in order to load relevant parts of the page at a time if loading times turn out to be an issue.
Generally, you want to minify and package your JavaScript to reduce initial page requests as much as possible. Your web application should mainly consist of one JavaScript file when you're all done, but that's not always possible, as certain plugins might not be compatible with your dependency-management framework.
I would argue that a single-page application that starts to exceed 3 MB or 60 requests on an initial page load (with cache turned off) is starting to get too big and unruly. You'll want to start looking for ways of distilling copy-and-pasted code down into extendable, reusable objects, and possibly dividing the one big application into a collection of smaller apps that all use the same library of models, collections, and views. If using RequireJS (what I use), you'll end up with different builds that will need to be compiled before launching any code if any of the dependencies contained within that build have changed.
Now, as for the 'speed' of your application, look at render-optimization tutorials for your chosen framework. Tricks like appending a model's views one by one as they are added to the collection result in a faster-rendering page than trying to attach a huge blob of HTML all at once. Be careful of memory leaks: ensure you're closing references to your views when switching between the pages of your single-page application. Create an 'onClose' method in your views that ensures all subviews and attached data references are destroyed when the view itself is closed, and garbage collection will do the rest.

Use a global variable for storing your collections and models, something like window.app.data = {}. Use a global view controller for navigating between the main sections of your application, which will help you close out view chains effectively. Use lazy loading wherever possible. Use 'base' models, collections, and views and extend them; doing this will give you more options later on for controlling their global behavior.
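The 'onClose' pattern described above can be sketched framework-agnostically. This is a simplified assumption modeled on Backbone-style views; the names BaseView, subViews, and onClose are illustrative, not part of any library.

```javascript
// Minimal sketch of a view base class that tears down its children on close,
// so nothing keeps references alive and garbage collection can do its job.
function BaseView() {
  this.subViews = [];
}

BaseView.prototype.close = function () {
  // Close children first so no child outlives its parent.
  this.subViews.forEach(function (child) { child.close(); });
  this.subViews = [];
  if (this.onClose) this.onClose(); // view-specific cleanup: unbind events, null out data
  this.closed = true;
};
```

A global view controller would call close() on the outgoing view chain before swapping in the next section of the app.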
This is all stuff you sort of learn from experience over time, but if enough care is taken, it's possible to create a well-running single page application on your first try. You're more likely to discover problems with the application design as you go though, so be prepared to refactor your code as these problems come up.
It depends much more on the computer than the browser - a computer with a slow CPU and limited amount of RAM will slow down much sooner than a beefy desktop.
A good proxy for this might be to test the site on a few different smartphones.
Also, slower devices sometimes run outdated and/or less feature-rich browsers, so you could do some basic user-agent sniffing or feature detection on the client and fall back to plain server-rendered HTML.
The more proficient I become with JavaScript, the more performance-aware I want to be. For example, I had an autocomplete box that would hit the server every so often to try to auto-complete the user's request, exactly how Google and most websites enhance their search. Now my question is: if I query the FB API for a list of friends and store these in a JavaScript object (roughly 700 entries with full name and user ID), how will this affect performance? I can't imagine it would be worse than constantly hitting the server for each request; storing this info locally and then querying it seems much more efficient. Where do I draw the line in storing information in a JavaScript object? I would like my server to do as little as possible and have adopted the philosophy of letting the client do 80% of the work and the server the remaining 20%. Obviously I want the client to experience a smooth application. How do the JS ninjas test the performance of their applications?
The most efficient solution - in terms of reduced network latency - would be to retrieve the friend list, store it using something like the localStorage API, and update the list in the background every once in a while. Keeping a list of this size in memory really shouldn't affect page performance much if at all (although on mobile this may not be the case). If someone has thousands of friends then you may see a slow-down, especially when reading the data from localStorage, but still, I doubt it.
Deciding where to draw the line is harder; it depends on so many factors. TBH this is something you may only know once your product is released and your users can give you feedback on performance, allowing you to fine-tune. A/B testing can help a lot in this regard.
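The localStorage approach above can be sketched as a small cache with a timestamp, refreshed when stale. The key name and max age here are arbitrary; the storage object is passed in so the same logic works with window.localStorage or anything exposing getItem/setItem.

```javascript
// Return the cached friend list, or null if missing or older than maxAgeMs.
function getCachedFriends(storage, maxAgeMs, now) {
  var raw = storage.getItem("friends");
  if (!raw) return null;
  var entry = JSON.parse(raw);
  if (now - entry.savedAt > maxAgeMs) return null; // stale: caller should re-fetch
  return entry.friends;
}

// Store the list along with the time it was saved.
function cacheFriends(storage, friends, now) {
  storage.setItem("friends", JSON.stringify({ savedAt: now, friends: friends }));
}
```

In the page you would call these with window.localStorage and Date.now(), re-fetching from the FB API in the background whenever getCachedFriends returns null.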
Looking for some general advice and/or thoughts...
I'm creating what I think is more of a web application than a web page, because I intend it to be like the Gmail app, where you would leave the page open all day long while getting updates "pushed" to the page (for the interested: I'm using the Comet programming technique). I've never created a web page before that was so rich in AJAX and JavaScript (I am now a huge fan of jQuery). Because of this, time and time again when I'm implementing a new feature that requires a dynamic change in the UI that the server needs to know about, I am faced with the same question:
1) Should I do all the processing on the client in JavaScript and post back as little as possible via AJAX,
or
2) Should I post a request to the server via AJAX, have the server do all the processing, and then send back the new HTML, so that on the AJAX response I simply assign the new HTML?
I have been inclined to always follow #1. This web app, I imagine, may get pretty chatty with all the AJAX requests. My thought is to minimize the size of the requests and responses as much as possible, and rely on the continuously improving JavaScript engines to do as much of the processing and UI updates as possible. I've discovered that with jQuery I can do so much on the client side that I wouldn't have been able to do very easily before. My JavaScript code is actually much bigger and more complex than my server-side code. There are also simple calculations I need to perform, and I've pushed those onto the client side too.
I guess the main question I have is: should we ALWAYS strive for client-side processing over server-side processing whenever possible? I've always felt the less the server has to handle, the better for scalability/performance. Let the power of the client's processor do all the hard work (if possible).
Thoughts?
There are several considerations when deciding if new HTML fragments created by an ajax request should be constructed on the server or client side. Some things to consider:
Performance. The work your server has to do is what you should be concerned with. By doing more of the processing on the client side, you reduce the amount of work the server does and speed things up. If the server can send a small bit of JSON instead of a giant HTML fragment, for example, it'd be much more efficient to let the client do it. In situations where it's a small amount of data being sent either way, the difference is probably negligible.
Readability. The disadvantage to generating markup in your JavaScript is that it's much harder to read and maintain the code. Embedding HTML in quoted strings is nasty to look at in a text editor with syntax coloring set to JavaScript and makes for more difficult editing.
Separation of data, presentation, and behavior. Along the lines of readability, having HTML fragments in your JavaScript doesn't make much sense for code organization. HTML templates should handle the markup and JavaScript should be left alone to handle the behavior of your application. The contents of an HTML fragment being inserted into a page is not relevant to your JavaScript code, just the fact that it's being inserted, where, and when.
I tend to lean more toward returning HTML fragments from the server when dealing with ajax responses, for the readability and code organization reasons I mention above. Of course, it all depends on how your application works, how processing intensive the ajax responses are, and how much traffic the app is getting. If the server is having to do significant work in generating these responses and is causing a bottleneck, then it may be more important to push the work to the client and forego other considerations.
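As a concrete illustration of the trade-off, here is a minimal sketch of the client-side option: the server sends compact JSON and the client builds the fragment. Field names are hypothetical, and real user data would need HTML-escaping before interpolation.

```javascript
// Build one table row from a small JSON record (fields are made up).
// In production, escape item.name before inserting it into markup.
function renderRow(item) {
  return "<tr><td>" + item.name + "</td><td>" + item.price + "</td></tr>";
}

// The client assembles the whole fragment; the server only ships the data.
// The server-side alternative would send this finished string directly.
function renderTable(items) {
  return "<table>" + items.map(renderRow).join("") + "</table>";
}
```

The readability objection above is visible even in this tiny example: the markup lives in quoted strings, which is exactly what a server-side template avoids.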
I'm currently working on a pretty computationally-heavy application right now and I'm rendering almost all of it on the client-side. I don't know exactly what your application is going to be doing (more details would be great), but I'd say your application could probably do the same. Just make sure all of your security- and database-related code lies on the server-side, because not doing so will open security holes in your application. Here are some general guidelines that I follow:
Don't ever rely on the user having a super-fast browser or computer. Some people are using Internet Explorer 7 on old machines, and if it's too slow for them, you're going to lose a lot of potential customers. Test on as many different browsers and machines as possible.
Any time you have some code that could potentially slow down or freeze the browser momentarily, show a feedback mechanism (in most cases a simple "Loading" message will do) to tell the user that something is indeed going on, and the browser didn't just randomly freeze.
Try to load as much as you can during initialization and cache everything. In my application, I'm doing something similar to Gmail: show a loading bar, load up everything that the application will ever need, and then give the user a smooth experience from there on out. Yes, they're going to have to potentially wait a couple seconds for it to load, but after that there should be no problems.
Minimize DOM manipulation. Raw number-crunching JavaScript performance might be "fast enough", but access to the DOM is still slow. Avoid creating and destroying elements; instead simply hide them if you don't need them at the moment.
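Two small sketches of the last point: toggling visibility instead of destroying nodes, and batching DOM writes so the browser reflows once rather than per element. Function names here are illustrative, not from any library.

```javascript
// "Hide, don't destroy": keep the element in the DOM and toggle its display,
// so showing it again is cheap.
function setVisible(el, visible) {
  el.style.display = visible ? "" : "none";
}

// Batch writes: build the rows off-document, attach in one operation.
function appendRows(table, rowsHtml) {
  var tbody = document.createElement("tbody");
  tbody.innerHTML = rowsHtml.join("");
  table.appendChild(tbody); // one reflow instead of one per row
}
```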
I recently ran into the same problem and decided to go with browser-side processing. Everything worked great in FF, IE8, and IE8 in IE7 mode, but then our client, using actual Internet Explorer 7, ran into problems: the application would freeze up and a script-timeout box would appear. I had put too much work into the solution to throw it away, so I ended up spending an hour or so optimizing the script and adding setTimeout wherever possible.
My suggestions?
If possible, keep non-critical calculations client side.
To keep data transfers low, use JSON and let the client side sort out the HTML.
Test your script using the lowest common denominator.
If needed, use the profiling feature in Firebug. Corollary: use the uncompressed (development) version of jQuery.
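The setTimeout fix described above is usually chunking: process a large array in small batches and yield between them so the browser can repaint and IE7 never reaches its script-timeout threshold. The chunk size below is an arbitrary starting point to tune.

```javascript
// Process items in chunks, yielding to the browser between chunks.
function processInChunks(items, handleItem, chunkSize, done) {
  var i = 0;
  function step() {
    var end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) handleItem(items[i]);
    if (i < items.length) {
      setTimeout(step, 0); // let the browser breathe, then continue
    } else if (done) {
      done();
    }
  }
  step();
}
```

Usage: processInChunks(rows, renderRow, 50, onFinished) instead of a single loop over thousands of rows.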
I agree with you. Push as much as possible to users, but not too much. If your app slows down, or even worse crashes their browser, you lose.
My advice is to actually test how your application behaves when left on all day. Check that there are no memory leaks. Check that there isn't an AJAX request created every half second after working with the application for a while (timers in JS can be a pain sometimes).
Apart from that never perform user input validation with javascript. Always duplicate it on server.
Edit
Use jQuery live binding. It will save you a lot of time when rebinding generated content and will make your architecture clearer. Sadly, when I was developing with jQuery it wasn't available yet; we used other tools with the same effect.
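The idea behind live binding is event delegation: one handler on a stable ancestor keeps working no matter how often the children are regenerated. A plain-JavaScript sketch of the mechanism (the delegate helper is illustrative, not a jQuery API):

```javascript
// Attach one listener to a stable root; it fires for any current or future
// descendant matching the selector, by walking up from the event target.
function delegate(root, selector, type, handler) {
  root.addEventListener(type, function (event) {
    var el = event.target;
    while (el && el !== root) {
      if (el.matches && el.matches(selector)) {
        handler.call(el, event);
        return;
      }
      el = el.parentNode;
    }
  });
}
```

With this in place, regenerating a list via AJAX needs no rebinding: the ancestor's single handler covers the new rows.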
In the past I also had a problem where generating one part of a page via AJAX depended on another part having been generated first. Generating the first part, then the second, will make your page slower, as expected. Plan this up front. Develop pages so that they already have all their content when opened.
Also (this applies to simple pages too), keep the number of referenced files on one server low. Join JavaScript and CSS libraries into one file on the server side. Keep images on a separate host, or better, on separate hosts (creating just a third-level domain will do too). Though this is only worth it in production; it makes the development process more difficult.
Of course it depends on the data, but the majority of the time, if you can push it to the client side, do. Make the client do more of the processing and use less bandwidth. (Again, this depends on the data; you can get into cases where you have to send more data across to do it client-side.)
Some stuff like security checks should always be done on the server. If you have a computation that takes a lot of data and produces less data, also put it on the server.
Incidentally, did you know you could run Javascript on the server side, rendering templates and hitting databases? Check out the CommonJS ecosystem.
There could also be cross-browser support issues. If you're using a cross-browser, client-side library (eg JQuery) and it can handle all the processing you need then you can let the library take care of it. Generating cross-browser HTML server-side can be harder (tends to be more manual), depending on the complexity of the markup.
This is possible, but with a heavy initial page load and heavy use of caching. Take Gmail as an example:
On initial page load, it downloads most of the JS files it needs to run, and most of them are cached.
Don't overuse images and graphics.
Load all the data needed for the initial view, along with subsequent, predictable user data. In Gmail and the latest Yahoo Mail, the inbox is not populated with only a single mail conversation body; the first few full email messages are loaded in advance at page-load time. This secret of high responsiveness comes at a cost (Gmail asks whether to load the light version if the bandwidth is low; I bet most of us have experienced that).
Follow the KISS principle: keep your design simple.
And never try to render the whole page using JavaScript; you cannot assume all your end users have high-spec systems or high-bandwidth connections.
It's smart to split the workload between your server and client.
If you think that in the future you might want to create an API for your application (communicating with iPhone or Android apps, letting other sites integrate with yours), you would have to duplicate a bunch of code for all those devices if you go with a bare-bones server implementation of your application.
So, my question is sort of based on experience; I'm wondering, for those out there who have tried to load large datasets, what's a reasonable amount of data to load? My users have relatively big pipes, so I don't have to worry about modem users, but I do concern myself with processing times. I'm guessing my limit is somewhere in the 300-1024k range, but does anyone have a method, or know a website that has measured this, that can be a little more definitive?
I've run across this resource. It's from 2005, so I'd consider it out of date even though the general lesson seems to be pretty sound:
http://blogs.nitobi.com/dave/2005/09/29/javascript-benchmarking-iv-json-revisited/
I also came across this:
http://www.jamesward.com/census/
Is there is anything else out there worth checking into?
A typical JSON packet can (and should) be compressed by the web server using gzip to approximately 10% of its initial size. So you're really looking at 30-100k. If those responses can be cached, then it's even less of a problem.
The size of the transmission should not be the deciding factor in whether a packet is "too much". Instead, look at how long it will take the browser to process this packet (update the UI, etc).
Actually parsing the JSON should be very fast, up to many megabytes of data. Turning that into something new in the UI will largely depend on how complicated the HTML you're producing is.
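If you want to check the parsing claim on your own target machines rather than take it on faith, timing JSON.parse on a synthetic payload is straightforward. The record shape below is made up; absolute timings will vary by machine and browser.

```javascript
// Build a synthetic payload of n small records.
function makePayload(n) {
  var rows = [];
  for (var i = 0; i < n; i++) rows.push({ id: i, name: "user" + i });
  return JSON.stringify(rows);
}

// Time how long parsing the payload takes.
function timeJsonParse(json) {
  var start = Date.now();
  var data = JSON.parse(json);
  return { ms: Date.now() - start, rows: data.length };
}
```

Running timeJsonParse(makePayload(100000)) in a browser console gives a quick feel for where parsing stops being free; the expensive part is almost always what you do with the data in the DOM afterwards.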
My boss wants to draw the local network and then, if you click on one of the computers or roll the mouse over one, he wants to see stuff like RAM, CPU, OS, etc. This has to be done in a browser, more specifically, the intranet's wiki.
One of my coworkers suggested using Flash (I am a complete noob, but I assume ActionScript is what would be used?), and I think it could also be done in JavaScript, but I don't know. Not sure which would be better.
He wants it to be extensible if possible, so adding another computer later or editing values shouldn't be too hard, though the topology shouldn't change very often. It may be up to me to code a separate way to edit it, though; I don't know.
Any thoughts/recommendations?
Up until one of the recent versions of Visio (2003?) there was a very useful network discovery tool that would build the diagram. Then using the Save As HTML option there are a number of different ways to build the clickable diagram.
I'd imagine other network diagrammers can do the same.
This is the easiest way I've ever found to do what you want. The only downside I'm aware of is that the Visio discovery sends a significant number of packets; it can flood a network. However, in my opinion, if your network is that susceptible to load, you want to know ASAP, preferably with a job you can stop and start at will.
Don't forget that any process you arrive at should be able to rebuild your diagram regularly, e.g. once a week or a month.
The technology you want might be called an HTML Image Map.
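A minimal sketch of an image map over a static network diagram. The image file, coordinates, hosts, and hardware details below are all hypothetical; the title attribute gives the mouse-over info requested.

```html
<!-- Clickable/hoverable regions over a network diagram image -->
<img src="network-diagram.png" usemap="#network" alt="Local network">
<map name="network">
  <area shape="rect" coords="10,10,90,60" href="hosts/server1.html"
        alt="server1" title="server1: 16 GB RAM, Xeon, Debian">
  <area shape="rect" coords="120,10,200,60" href="hosts/ws01.html"
        alt="ws01" title="ws01: 8 GB RAM, i5, Windows">
</map>
```

This keeps the page pure HTML (easy to embed in a wiki), and editing it later is just updating coordinates and titles.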
It might be cost-effective to purchase a monitoring system.
OpManager is pretty easy to use and provides the graphing capabilities.
Paessler is less expensive and still detailed, but doesn't have the graphing capabilities.