JavaScript Optimization - Facing issues of slow GUI

I am working on a web app in which I use JavaScript on the client side for validation, calling backend CGI scripts, and similar tasks. My problem is that the web GUI has become somewhat slower than it was earlier. Everything still works as expected; the slowness is the only issue.
I know there may be many factors directly affecting the speed of the application.
Ultimately the application depends on the CGI script responses, but is it still possible to make the app faster by taking care with certain JavaScript functions?
Can you please suggest what steps I should take to make the JavaScript execution somewhat faster (e.g. fewer lines of code)?
Thanks in advance...

As long as you do not post some JS code, it will be difficult to help you.
Still, speaking in general: keep your DOM manipulations to a minimum, especially when dealing with lots of data. Do not call jQuery functions that affect the DOM (append, insertBefore, insertAfter) inside a loop - do all your preparation in the loop and then call a DOM-affecting function once with all the changes together, as in the sketch below.
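For illustration, a minimal sketch of that pattern (rows and #results are made-up names):

// Slow: one DOM insertion per iteration
// $.each(rows, function (i, row) {
//     $('#results').append('<li>' + row.name + '</li>');
// });

// Faster: build the markup in the loop, touch the DOM once
var items = [];
$.each(rows, function (i, row) { // rows: hypothetical array of {name: ...}
    items.push('<li>' + row.name + '</li>');
});
$('#results').append(items.join(''));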
But of course, I do not know whether this is the case anywhere in your application's code.

Related

AJAX on Android Webview: Should I use JSON or HTML?

I am building a feed-like application with infinite scroll inside my native Android application using a WebView. I first tried the Ionic framework but was really disappointed with its performance.
I am now thinking of developing it in pure jQuery and responsive HTML/CSS to achieve superior performance. One thing I am considering is whether to emit HTML or JSON from the server-side APIs.
Emitting JSON means client-side DOM manipulation, which can again hurt performance. I am looking for maximum performance.
On the other hand, I am not sure emitting HTML is a good idea for the maintainability of this application.
What do you guys recommend?
Emitting JSON should be the better solution:
It helps if some manipulation of, or operation on, the data is required in a later phase of development.
It avoids a possible security issue: emitting HTML mixes control (tags) and data. By preventing inline script execution you can protect against attacks to a large extent.
That said, if properly implemented and for the right reasons, HTML might not be a totally bad choice. But performance overhead might not be the right reason - then again, I haven't tested to confirm this.
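As a rough sketch of the JSON approach, with a made-up /api/feed endpoint and #feed element, the data can still be turned into markup in one batch:

// Fetch a page of posts as JSON and append them with a single insertion
$.getJSON('/api/feed?page=2', function (posts) {
    var html = $.map(posts, function (p) {
        return '<article class="post">' +
               '<h3>' + escapeHtml(p.title) + '</h3>' +
               '<p>' + escapeHtml(p.body) + '</p>' +
               '</article>';
    }).join('');
    $('#feed').append(html);
});

// Minimal escaping helper so data never becomes markup
function escapeHtml(s) {
    return String(s).replace(/[&<>"]/g, function (c) {
        return { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c];
    });
}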
You can also have a look at this SO question.

Ruby plugin for web browser?

Am I correct that if someone wrote a Ruby plugin for a web browser and a user installed that plugin, it would be possible to replace JavaScript with Ruby on the frontend?
Aren't there any plugins for this? Or even for using languages other than JavaScript on the browser side?
You could use http://ironruby.net/ in a Silverlight plugin, but I haven't a clue how easy DOM interaction is that way.
But I BEG YOU, don't do it! Please, use the Open Web Stack to solve your problems.
If you don't leave your Ruby world of comfort, you will not only hurt your users' experience ("WTF? Why do I need Silverlight for this page?") but you will also get stuck in your small little Ruby world without learning anything new and exciting.
It would be better for both of you if you'd just go ahead and learn JavaScript.
Because remember: "Learning is a good thing!"
One thing is A FACT: as of 2010, JavaScript does not have a thread-stopping "sleep" function (other than one that just burns CPU cycles).
I had been working with JavaScript for at least a year before posting this comment, and I have come to the conclusion that the lack of a thread-stopping sleep function is a real show-stopper for threading-related code.
A consequence of the lack of a sleep function is that it's not possible to simulate a Ruby/C#/C++/etc.-like threading model in JavaScript, which in turn means that it's not possible to translate any of the threading-enabled languages to JavaScript, no matter what one does, unless JavaScript is supplemented with a (preferably non-CPU-cycle-burning) sleep function.
If one surfs around, one can find many comments stating that the sleep function is not even necessary, that setTimeout is sufficient, etc., but I guess that people who state that have not tried to implement a threading framework in JavaScript. (Think of mutexes and critical sections. I refuse to go into a discussion about whether critical sections/synchronization are necessary for cases where widget content consists of multiple data components that form an "atomic whole".)
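To make the point concrete, here is a sketch of the two options the comment alludes to (busySleep is a made-up name):

// The only "sleep" available: a busy-wait that burns CPU cycles
// and freezes the single JavaScript thread while it spins.
function busySleep(ms) {
    var end = Date.now() + ms;
    while (Date.now() < end) { /* spin */ }
}

// The non-blocking alternative: setTimeout schedules a callback and
// returns immediately, so it cannot pause execution mid-function.
setTimeout(function () {
    console.log('runs later; the code after setTimeout already ran');
}, 1000);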
The second show-stopper for the whole DOM model is the implementation that renders DOM elements IN THE BACKGROUND THREAD.
Here's what happens:
In JavaScript:
create_my_awsome_widget_in_DOM();
edit_my_awsome_widget_by_editing_DOM_inside_it();
if_we_are_lucky_we_reach_here_without_crashing_the_app();
As the DOM is rendered in the background (read: in a separate thread), there will be a race condition between the thread that initiated the DOM editing, by making a call to create_my_awsome_widget_in_DOM(), and the DOM rendering. If the rendering thread is "quick enough" to render the DOM before the JavaScript thread calls edit_my_awsome_widget_by_editing_DOM_inside_it(), everything works fine, but if it's the other way around, then the JavaScript starts to modify a region of the DOM that does not (yet) exist.
Essentially it means that, due to the background DOM rendering, create_my_awsome_widget_in_DOM() and edit_my_awsome_widget_by_editing_DOM_inside_it() are executed in a random order, and obviously the application crashes if edit_my_awsome_widget_by_editing_DOM_inside_it() is called before create_my_awsome_widget_in_DOM().
There might be a way to do it indirectly. Here is the original presentation at RubyConf 2008. The topic:
This talk is about the many paths towards getting ruby running in your web browser. I'll first talk about why this is even a good idea. I'll then talk briefly about each approach I've investigated and the differing amounts of FAIL I encountered with each. Next I'll focus on the most promising contender, rubyjs, a ruby compiler which outputs javascript.
The project rubyjs still exists, but it appears to be dead. The idea probably was a little too crazy.
mruby seems like an interesting option for running ruby in a web browser:
http://qiezi.me/projects/mruby-web-irb/mruby.html
It's not a typical plugin, as it does not require installation; it's JavaScript (compiled from C) running Ruby code.
Technically that would be correct, assuming the browser/plugin also provided an extensive API to deal with the DOM and such. I am not aware of any plugins that make this possible, but it's an interesting idea.

Serverside processing vs client side processing + ajax?

Looking for some general advice and/or thoughts...
I'm creating what I think is more of a web application than a web page, because I intend it to be like the Gmail app, where you would leave the page open all day long while getting updates "pushed" to the page (for the interested: I'm using the Comet programming technique). I've never created a web page before that was so rich in ajax and JavaScript (I am now a huge fan of jQuery). Because of this, time and time again when I'm implementing a new feature that requires a dynamic change in the UI that the server needs to know about, I am faced with the same question:
1) Should I do all the processing on the client in JavaScript and post back as little as possible via ajax,
or
2) Should I post a request to the server via ajax, have the server do all the processing, and then send back the new HTML? Then on the ajax response I do a simple assignment with the new HTML.
I have been inclined to always follow #1. This web app, I imagine, may get pretty chatty with all the ajax requests. My thought is to minimize as much as possible the size of the requests and responses, and rely on the continuously improving JavaScript engines to do as much of the processing and UI updates as possible. I've discovered that with jQuery I can do so much on the client side that I wouldn't have been able to do very easily before. My JavaScript code is actually much bigger and more complex than my server-side code. There are also simple calculations I need to perform, and I've pushed those to the client side, too.
I guess the main question I have is: should we ALWAYS strive for client-side processing over server-side processing whenever possible? I've always felt the less the server has to handle, the better for scalability/performance. Let the power of the client's processor do all the hard work (if possible).
Thoughts?
There are several considerations when deciding if new HTML fragments created by an ajax request should be constructed on the server or client side. Some things to consider:
Performance. The work your server has to do is what you should be concerned with. By doing more of the processing on the client side, you reduce the amount of work the server does, and speed things up. If the server can send a small bit of JSON instead of giant HTML fragment, for example, it'd be much more efficient to let the client do it. In situations where it's a small amount of data being sent either way, the difference is probably negligible.
Readability. The disadvantage to generating markup in your JavaScript is that it's much harder to read and maintain the code. Embedding HTML in quoted strings is nasty to look at in a text editor with syntax coloring set to JavaScript and makes for more difficult editing.
Separation of data, presentation, and behavior. Along the lines of readability, having HTML fragments in your JavaScript doesn't make much sense for code organization. HTML templates should handle the markup and JavaScript should be left alone to handle the behavior of your application. The contents of an HTML fragment being inserted into a page is not relevant to your JavaScript code, just the fact that it's being inserted, where, and when.
I tend to lean more toward returning HTML fragments from the server when dealing with ajax responses, for the readability and code organization reasons I mention above. Of course, it all depends on how your application works, how processing intensive the ajax responses are, and how much traffic the app is getting. If the server is having to do significant work in generating these responses and is causing a bottleneck, then it may be more important to push the work to the client and forego other considerations.
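For contrast, a bare-bones sketch of the server-rendered approach this answer leans toward (the /messages/inbox URL and #inbox element are hypothetical):

$.ajax({
    url: '/messages/inbox',
    dataType: 'html',
    success: function (fragment) {
        // The server did the templating; the client just swaps it in
        $('#inbox').html(fragment);
    }
});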
I'm currently working on a pretty computationally-heavy application right now and I'm rendering almost all of it on the client-side. I don't know exactly what your application is going to be doing (more details would be great), but I'd say your application could probably do the same. Just make sure all of your security- and database-related code lies on the server-side, because not doing so will open security holes in your application. Here are some general guidelines that I follow:
Don't ever rely on the user having a super-fast browser or computer. Some people are using Internet Explorer 7 on old machines, and if it's too slow for them, you're going to lose a lot of potential customers. Test on as many different browsers and machines as possible.
Any time you have some code that could potentially slow down or freeze the browser momentarily, show a feedback mechanism (in most cases a simple "Loading" message will do) to tell the user that something is indeed going on, and the browser didn't just randomly freeze.
Try to load as much as you can during initialization and cache everything. In my application, I'm doing something similar to Gmail: show a loading bar, load up everything that the application will ever need, and then give the user a smooth experience from there on out. Yes, they're going to have to potentially wait a couple seconds for it to load, but after that there should be no problems.
Minimize DOM manipulation. Raw number-crunching JavaScript performance might be "fast enough", but access to the DOM is still slow. Avoid creating and destroying elements; instead simply hide them if you don't need them at the moment.
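A tiny sketch of that last guideline (the panel markup and ids are invented):

// Reuse existing elements: toggling visibility is a cheap style change,
// while creating and destroying nodes forces extra DOM work.
function showPanel(id) {
    $('.panel').hide();
    $('#' + id).show();
}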
I recently ran into the same problem and decided to go with browser-side processing. Everything worked great in FF, IE8, and IE8 in IE7 mode, but then... our client, using Internet Explorer 7, ran into problems: the application would freeze up and a script-timeout box would appear. I had put too much work into the solution to throw it away, so I ended up spending an hour or so optimizing the script and adding setTimeout wherever possible (see the sketch after the suggestions below).
My suggestions?
If possible, keep non-critical calculations client side.
To keep data transfers low, use JSON and let the client side sort out the HTML.
Test your script using the lowest common denominator.
If needed, use the profiling feature in Firebug. Corollary: use the uncompressed (development) version of jQuery.
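Here is a minimal sketch of the setTimeout trick mentioned above (processInChunks is a made-up helper):

// Process a large array in slices, yielding to the UI thread between
// slices so the browser can repaint and the timeout dialog never fires.
function processInChunks(items, worker, chunkSize) {
    var i = 0;
    (function next() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            worker(items[i]);
        }
        if (i < items.length) {
            setTimeout(next, 0); // yield, then continue where we left off
        }
    })();
}

// e.g. processInChunks(bigArray, renderRow, 200);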
I agree with you. Push as much as possible to users, but not too much. If your app slows down, or even worse crashes, their browser, you lose.
My advice is to actually test how your application behaves when left open all day. Check that there are no memory leaks. Check that an ajax request isn't being created every half second after working with the application for a while (timers in JS can be a pain sometimes).
Apart from that, never rely on JavaScript alone for user input validation. Always duplicate it on the server.
Edit
Use jQuery live binding, as in the sketch below. It will save you a lot of time when rebinding generated content and will make your architecture clearer. Sadly, when I was developing with jQuery it wasn't available yet; we used other tools with the same effect.
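A rough sketch of live binding, with made-up selectors (this is jQuery's .live() from that era; jQuery 1.7+ replaces it with delegated .on()):

// A handler bound this way survives content regenerated via ajax,
// so no rebinding is needed after each update.
$('#inbox a.open').live('click', function () {
    openMessage($(this).data('id')); // openMessage is hypothetical
});

// Equivalent with delegation in jQuery 1.7+:
// $(document).on('click', '#inbox a.open', function () { ... });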
In the past I also had a problem where the ajax generation of one page part depended on the generation of another part. Generating the first part first and the second part second will make your page slower, as expected. Plan for this up front. Develop pages so that they already have all their content when opened.
Also (this applies to simple pages too), keep the number of referenced files per server low. Join JavaScript and CSS libraries into one file on the server side. Keep images on a separate host, or better, separate hosts (creating just a third-level domain will do too). Though this is only worth it in production; it makes the development process more difficult.
Of course it depends on the data, but the majority of the time, if you can push it client-side, do. Make the client do more of the processing and use less bandwidth. (Again, this depends on the data; you can get into cases where you have to send more data across to do it client-side.)
Some stuff like security checks should always be done on the server. If you have a computation that takes a lot of data and produces less data, also put it on the server.
Incidentally, did you know you could run Javascript on the server side, rendering templates and hitting databases? Check out the CommonJS ecosystem.
There could also be cross-browser support issues. If you're using a cross-browser, client-side library (e.g. jQuery) and it can handle all the processing you need, then you can let the library take care of it. Generating cross-browser HTML server-side can be harder (it tends to be more manual), depending on the complexity of the markup.
This is possible, but it requires a heavy initial page load and heavy use of caching. Take Gmail as an example:
On the initial page load, it downloads most of the JS files it needs to run, and almost all of them are cached.
Don't overuse images and graphics.
Load all the data you need to show on the initial load, along with the subsequent predictable user data. In Gmail and the latest Yahoo Mail, the inbox is not populated with just a single mail conversation body; the first few full email messages are loaded in advance at page-load time. The secret of high responsiveness comes at a cost (Gmail asks you to load the light version if the bandwidth is low - I bet most of us have experienced that).
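A hedged sketch of that preload-and-cache idea (the endpoints and render() are invented):

var messageCache = {};

// Fetch the first few full messages up front, at page-load time
function preloadInbox() {
    $.getJSON('/api/messages?limit=5', function (messages) {
        $.each(messages, function (i, m) {
            messageCache[m.id] = m; // cache full bodies in advance
        });
    });
}

// Later opens are served from the cache: no round trip, instant UI
function openMessage(id) {
    if (messageCache[id]) {
        render(messageCache[id]);
    } else {
        $.getJSON('/api/messages/' + id, function (m) {
            messageCache[m.id] = m;
            render(m);
        });
    }
}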
Follow the KISS principle: keep your design simple.
And never try to render the whole page using JavaScript; you cannot assume all your end users have high-spec systems or high-bandwidth connections.
It's smart to split the workload between your server and client.
If you think that in the future you might want to create an API for your application (communicating with iPhone or Android apps, letting other sites integrate with yours), you would have to duplicate a bunch of code for all those devices if you go with a bare-bones server implementation of your application.

When should I use Inline vs. External Javascript?

I would like to know when I should include external scripts or write them inline with the html code, in terms of performance and ease of maintenance.
What is the general practice for this?
Real-world scenario - I have several HTML pages that need client-side form validation. For this I use a jQuery plugin that I include on all these pages. But the question is, do I:
write the bits of code that configure this script inline?
include all the bits in one file that's shared among all these html pages?
include each bit in a separate external file, one for each html page?
Thanks.
At the time this answer was originally posted (2008), the rule was simple: All script should be external. Both for maintenance and performance.
(Why performance? Because if the code is in a separate file, it can be cached by browsers more easily.)
JavaScript doesn't belong in the HTML code and if it contains special characters (such as <, >) it even creates problems.
Nowadays, web scalability has changed. Reducing the number of requests has become a valid consideration due to the latency of making multiple HTTP requests. This makes the answer more complex: in most cases, having JavaScript external is still recommended. But for certain cases, especially very small pieces of code, inlining them into the site’s HTML makes sense.
Maintainability is definitely a reason to keep them external, but if the configuration is a one-liner (or in general shorter than the HTTP overhead you would incur by making those files external), it's better performance-wise to keep them inline. Always remember that each HTTP request generates some overhead in terms of execution time and traffic.
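To illustrate the cut-off, a hedged example (the plugin wiring and selector are invented):

<!-- One-line configuration: cheaper inline than as an extra request -->
<script src="jquery.js"></script>
<script src="jquery.validate.js"></script>
<script>$('#signup').validate({ rules: { email: 'required' } });</script>

<!-- Anything longer or shared between pages: external and cacheable -->
<script src="signup-validation.js"></script>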
Naturally this all becomes irrelevant the moment your code is longer than a couple of lines and is not really specific to one single page. The moment you want to be able to reuse that code, make it external. If you don't, look at its size and decide then.
If you only care about performance, most of the advice in this thread is flat-out wrong, and it is becoming more and more wrong in the SPA era, where we can assume that the page is useless without the JS code. I've spent countless hours optimizing SPA page-load times and verifying these results with different browsers. Across the board, the performance increase from re-orchestrating your HTML can be quite dramatic.
To get the best performance, you have to think of pages as two-stage rockets. These two stages roughly correspond to <head> and <body> phases, but think of them instead as <static> and <dynamic>. The static portion is basically a string constant which you shove down the response pipe as fast as you possibly can. This can be a little tricky if you use a lot of middleware that sets cookies (these need to be set before sending http content), but in principle it's just flushing the response buffer, hopefully before jumping into some templating code (razor, php, etc) on the server. This may sound difficult, but then I'm just explaining it wrong, because it's near trivial. As you may have guessed, this static portion should contain all javascript inlined and minified. It would look something like
<!DOCTYPE html>
<html>
<head>
<script>/*...inlined jquery, angular, your code*/</script>
<style>/* ditto css */</style>
</head>
<body>
<!-- inline all your templates, if applicable -->
<script type='template-mime' id='1'></script>
<script type='template-mime' id='2'></script>
<script type='template-mime' id='3'></script>
Since it costs you next to nothing to send this portion down the wire, you can expect that the client will start receiving it somewhere around 5ms + latency after connecting to your server. Assuming the server is reasonably close, this latency could be between 20ms and 60ms. Browsers will start processing this section as soon as they get it, and the processing time will normally dominate the transfer time by a factor of 20 or more, which is now your amortized window for server-side processing of the <dynamic> portion.
It takes about 50ms for the browser (chrome, rest maybe 20% slower) to process inline jquery + signalr + angular + ng animate + ng touch + ng routes + lodash. That's pretty amazing in and of itself. Most web apps have less code than all those popular libraries put together, but let's say you have just as much, so we would win latency+100ms of processing on the client (this latency win comes from the second transfer chunk). By the time the second chunk arrives, we've processed all js code and templates and we can start executing dom transforms.
You may object that this method is orthogonal to the inlining concept, but it isn't. If you, instead of inlining, link to CDNs or your own servers, the browser would have to open another connection (or several) and delay execution. Since this execution is basically free (as the server side is talking to the database), it must be clear that all of these jumps would cost more than doing no jumps at all. If there were a browser quirk that said external JS executes faster, we could measure which factor dominates. My measurements indicate that extra requests kill performance at this stage.
I work a lot with optimization of SPA apps. It's common for people to think that data volume is a big deal, while in truth latency, and execution often dominate. The minified libraries I listed add up to 300kb of data, and that's just 68 kb gzipped, or 200ms download on a 2mbit 3g/4g phone, which is exactly the latency it would take on the same phone to check IF it had the same data in its cache already, even if it was proxy cached, because the mobile latency tax (phone-to-tower-latency) still applies. Meanwhile, desktop connections that have lower first-hop latency typically have higher bandwidth anyway.
In short, right now (2014), it's best to inline all scripts, styles and templates.
EDIT (MAY 2016)
As JS applications continue to grow, and some of my payloads now stack up to 3+ megabytes of minified code, it's becoming obvious that at the very least common libraries should no longer be inlined.
Externalizing JavaScript is one of Yahoo's performance rules:
http://developer.yahoo.com/performance/rules.html#external
While the hard-and-fast rule that you should always externalize scripts will generally be a good bet, in some cases you may want to inline some of the scripts and styles. You should however only inline things that you know will improve performance (because you've measured this).
I think the short script that is specific to one page is the (only) defensible case for inline script.
Actually, there's a pretty solid case to use inline JavaScript. If the JS is small enough (a one-liner), I tend to prefer it inline because of two factors:
Locality. There's no need to navigate to an external file to validate the behaviour of some JavaScript.
AJAX. If you're refreshing some section of the page via AJAX, you may lose all of your DOM handlers (onclick, etc.) for that section, depending on how you bound them. For example, using jQuery you can use the live or delegate methods to circumvent this, but I find that if the JS is small enough it is preferable to just put it inline.
Another reason why you should always use external scripts is for easier transition to Content Security Policy (CSP). CSP defaults forbid all inline script, making your site more resistant to XSS attacks.
I would take a look at the required code and divide it into as many separate files as needed. Every JS file would hold only one "logical set" of functions, e.g. one file for all login-related functions.
Then, during site development, on each HTML page you only include those that are needed.
When you go live with your site, you can optimize by combining every JS file a page needs into one file.
The only defense I can offer for inline JavaScript is that when using strongly typed views with .NET MVC you can refer to C# variables mid-JavaScript, which I've found useful.
On the point of keeping JavaScript external:
ASP.NET 3.5 SP1 recently introduced functionality to create a composite script resource (merging a bunch of JS files into one). Another benefit is that when webserver compression is turned on, downloading one slightly larger file will have a better compression ratio than many smaller files (also less HTTP overhead, roundtrips, etc.). I guess this saves on the initial page load; then browser caching kicks in, as mentioned above.
ASP.NET aside, this screencast explains the benefits in more detail:
http://www.asp.net/learn/3.5-SP1/video-296.aspx
Three considerations:
How much code do you need (sometimes libraries are a first-class consumer)?
Specificity: is this code only functional in the context of this specific document or element?
Document weight: any code inside the document makes it longer and thus slower. Besides that, SEO considerations make it obvious that you should minimize internal scripting.
External scripts are also easier to debug using Firebug. I like to unit test my JavaScript, and having it all external helps. I hate seeing JavaScript in PHP code and HTML; it looks like a big mess to me.
Another hidden benefit of external scripts is that you can easily run them through a syntax checker like jslint. That can save you from a lot of heartbreaking, hard-to-find, IE6 bugs.
In your scenario it sounds like writing the external stuff in one file shared among the pages would be good for you. I agree with everything said above.
During early prototyping keep your code inline for the benefit of fast iteration, but be sure to make it all external by the time you reach production.
I'd even dare to say that if you can't place all your JavaScript externally, then you have a bad design on your hands, and you should refactor your data and scripts.
Google has included load times in its page-ranking measurements. If you inline a lot, it will take longer for the spiders to crawl through your page, and this may influence your page ranking if you have too much included. In any case, different strategies may influence your ranking.
Well, I think you should use inline scripts when making single-page websites, as the scripts will not need to be shared across multiple pages.
Pros of internal JS:
It's easier to manage and debug.
You can see what's happening.
Cons of internal JS:
People can change it around, which can really annoy you.
Pros of external JS:
No changing around.
You can look more professional (or at least that's what I think).
Cons of external JS:
Harder to manage.
It's harder to know what's going on.
Always try to use external JS, as inline JS is always difficult to maintain.
Moreover, it is professionally required that you use external JS, since the majority of developers recommend it.
I myself use external JS.

jQuery AJAX vs. UpdatePanel

We've got a page with a ton of jQuery (approximately 2000 lines) that we want to trim down because it is a maintenance nightmare, and it might be easier to maintain on the server. We've thought about using UpdatePanel for this. However, we don't like the fact that the UpdatePanel sends the whole page back to the server.
Don't move to UpdatePanels. After coming from jQuery, the drop in performance would be untenable. Especially on a page as complex as yours sounds.
If you have 2,000 lines of JavaScript code, the solution is to refactor that code. If you put 2,000 lines of C# code in one file, it would be difficult to maintain too. That would be difficult to manage effectively with any language or tool.
If you're using 3.5 SP1, you can use the ScriptManager's new script combining to separate your JavaScript into multiple files with no penalty. That way, you can logically partition your code just as you would with server side code.
Please don't put yourself in that world of pain. Instead use UFRAME, which is a lot faster and is implemented in jQuery.
Now, to manage those 2000 lines of JavaScript code, I recommend splitting the code into different files and setting up your build process to join them into chunks using JSMin or the Yahoo Compressor.
I don't know if there is a way to optimize UpdatePanels, but my company has found its performance to be pretty poor. jQuery is much much faster at doing pretty much anything.
There can be a lot of lag between the time when an UpdatePanel triggers an update and when the UpdatePanel actually updates the page.
The only reason we use UpdatePanels is because of the ease of development. Almost nothing needs to be done to make them work.
Using UpdatePanel forces you to use ScriptManager, which adds tons of scripts to your web pages.
UpdatePanel gives you a partial postback, not real ajax.
If your app will run only on a LAN and not the internet, that's OK; but if your target is the internet, try refactoring your code and compressing it with some tools before publishing the website.
