Techniques or Data Structures For Speeding Up Javascript? - javascript

This might be a bit vague.
I'm working on an Atari 2600 emulator in javascript (http://jsatari.com/demo/ and https://github.com/docmarionum1/jsAtari) and it just runs incredibly slow. In chrome around 15-20 FPS and in Firefox around 2-3 FPS (on my computer).
I've run through my code optimizing with Chrome's and Firebug's profilers and optimized whatever I could, but I'm FAR from what I need and I don't see much more room for improvement in my algorithms. (Or, at least not without significantly diverging from the original hardware implementation.)
And so far my biggest improvements haven't come from improving the algorithms, but from changing my data structures:
Switching my large arrays (thousands of elements) to Typed Arrays provided the biggest boost in performance. Firefox would freeze before the switch, and Chrome ran about 10x faster.
I replaced some smaller arrays with individual variables and switch statements, also providing a significant boost in performance.
So, it seems pretty clear that arrays are incredibly slow.
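For reference, the typed-array switch looks roughly like this (a minimal sketch with hypothetical names, not the actual jsAtari code). A Uint8Array is fixed-length, always packed, and holds raw bytes in a flat backing store instead of tagged values:

```javascript
// Before: plain array; elements are tagged values and the array can go "holey".
const ramSlow = new Array(4096).fill(0);

// After: typed array; one byte per cell, always packed.
const ram = new Uint8Array(4096);

function poke(addr, value) {
  // Writes are masked to the 4K address space; Uint8Array stores value modulo 256.
  ram[addr & 0x0fff] = value;
}

function peek(addr) {
  return ram[addr & 0x0fff];
}

poke(0x1234, 300); // 0x1234 & 0x0fff === 0x234; 300 wraps to 44
```

Note the built-in wrapping: out-of-range writes are truncated to the element type, which matches 8-bit hardware semantics for free.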
In general, performance just seems very finicky, with small changes in my code resulting in large changes to the performance (for better or worse). Are there other oddities that could be affecting the performance?
For instance, are objects created with object literal notation represented differently by the engine? I've seen noticeable changes in performance when merely adding or removing variables from an object, even if they weren't being used. Should the number of variables affect that?
Are there any other new developments in javascript (like Typed Arrays) that could have a big effect on performance?
And, finally, is there any good way to track performance due to intangibles like these? The profilers don't seem to help because the entire script will change, not just certain parts.

I saw that you create many closures (and immediately execute them), for example in your UpdatePos method. Creating closures that heavily may be a big performance problem.
I would recommend taking a look at JavaScript optimization tools like Google's Closure Compiler: http://closure-compiler.appspot.com/home I can really recommend using it with Advanced Optimizations, but then you have to give it all of your JavaScript code; otherwise, or if you use eval, you might run into problems, because it renames more than just local variables and deletes unused code.
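As an illustration of the closure point (the code here is a hypothetical sketch, not the actual jsAtari UpdatePos):

```javascript
// Slow: a brand-new function object is allocated and invoked on every
// iteration of the hot loop.
function updatePosSlow(sprites) {
  var total = 0;
  for (var i = 0; i < sprites.length; i++) {
    total += (function (sp) {
      return sp.x + sp.dx;
    })(sprites[i]);
  }
  return total;
}

// Faster: define the helper once, outside the loop, and just call it.
function advance(sp) {
  return sp.x + sp.dx;
}

function updatePosFast(sprites) {
  var total = 0;
  for (var i = 0; i < sprites.length; i++) {
    total += advance(sprites[i]);
  }
  return total;
}
```

Both return the same result; the second version just avoids per-iteration allocation and gives the JIT a single stable call target.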

Related

How to introspect elements kind in an array in V8

After reading this article: https://v8.dev/blog/elements-kinds. I was wondering if some of the arrays that are created in my code are packed or holey. Is there any way to check this at runtime in a browser environment (specifically Chrome)?
I am looking for something like a %DebugPrint() that works on Chrome.
Maybe there are some other clues that I can look for. For example, some info that is captured in heap snapshots, etc.
(V8 developer here.)
As far as I'm aware, Chrome (DevTools) does not expose arrays' "elements kind".
And that's probably okay, because the performance difference between "packed" and "holey" elements is very small. Essentially, it only shows up in microbenchmarks. The typical benefit of a packed array is that two machine instructions can be avoided that would otherwise have to happen. If you're looking at a one-line hot loop body that compiles to maybe a dozen machine instructions, then saving two of them can be measurable. If you're looking at a real-world app where hundreds of kilobytes of code contribute to overall performance, saving two machine instructions here and there isn't going to matter.
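For the curious, the transitions themselves are easy to provoke in plain code, even if the kind isn't observable from standard JS. A sketch (the kind names in the comments follow the linked V8 blog post; they are internal and can vary between V8 versions):

```javascript
const packed = [1, 2, 3];  // PACKED_SMI_ELEMENTS (small integers only)
packed.push(4.5);          // -> PACKED_DOUBLE_ELEMENTS
packed.push('x');          // -> PACKED_ELEMENTS (generic)

const holey = [1, 2, 3];
holey[5] = 6;              // skips indices 3 and 4 -> HOLEY_SMI_ELEMENTS
// The holes are real: they count toward length but the indices don't exist.
console.log(holey.length); // 6
console.log(3 in holey);   // false

// Outside Chrome, Node (which embeds V8) can print the kind directly:
//   node --allow-natives-syntax -e "const a = [1,2,3]; a[5] = 6; %DebugPrint(a);"
```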
I understand that you may be curious; but as far as tangible optimizations go, there are much more impactful ways to spend your time. Talking about elements kinds is pretty much a "for your curiosity, here are some of the lengths to which JS engines go to squeeze every last bit of performance out of JavaScript" story; it is not meant to be important performance advice that most developers should be aware of.

Is JavaScript standby functionality stored in RAM or the hard drive?

This is a bit of a strange question where I do not know enough to possibly ask it correctly, so I will do my best (googling for a worthwhile result has proven difficult):
You write a Javascript program
V8 (or other interpreters) compiles your script (I understand WHICH interpreter is running vastly changes the results of the answer to this, so let's stick with V8)
Your Javascript could be a formidably large footprint of executable code
Does V8 keep any routines that are not in use on the hard drive? Or do the interpreted JavaScript commands stay completely in RAM?
I was wondering this because it would seem unfortunate for a massive JS program to eat into the available RAM allotment a browser gives if the complexity of the JS program was overly large.
I know this gets into: if you have such a huge program you're doing it wrong, but I like to push things where I can and if I get a better understanding of how this all works, I can make better decisions :)
(V8 developer here.) Short answer: no, V8 does not swap any unused things (code or otherwise) to disk at runtime.
Executable code is typically not the biggest consumer of memory we see in V8, it tends to be dwarfed by (non-code) data. Still, the amount of code can certainly be significant; one of the reasons why V8 switched its first (unoptimized) execution tier from a compiler to an interpreter a few years ago was because that interpreter's bytecode is much smaller than the earlier non-optimizing compiler's machine code it replaced. When a function is optimized, it's still compiled to machine code; but since typically only relatively few functions get optimized, that usually only takes up a low single-digit percentage of overall memory.
In embedders that support it (like Chrome), V8 does support caching certain things between runs, including code. That's a performance optimization: some work can be avoided if you visit the same website multiple times. This mechanism doesn't temporarily free up any memory, and it's not supposed to.
Generally, it's not the job of individual applications to swap things to disk -- it's the job of the operating system. When all running applications combined use more memory than is available, then the kernel will pick some "pages" (chunks of memory) and write them to disk. Applications don't notice any of that (except a massive loss of performance when those pages are needed again) and don't have to do any work to support it. So this part of the answer applies not just to V8, but also to other JavaScript engines, and in general to all other programs you may run on your computer.

JavaScript instance vs prototype methods and heap snapshot data using Chrome Dev Tools

I have a heap profile taken in Chrome Dev Tools, but I am not sure how to interpret the data. As a test, I created 10,000 WidgetBuilder objects, each with their own methods. I would like to profile out storing methods on instances versus the prototype and see how that affects memory and performance when my page loads.
Should I focus on Retained Size or Shallow Size?
Are the values listed in these columns in bytes?
What is considered a lot of memory?
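As a side note on the instance-vs-prototype comparison itself, here's a minimal sketch (the WidgetBuilder internals are hypothetical) of why prototype methods are lighter: each instance-bound method is a distinct function object, while a prototype method is one shared object no matter how many instances exist:

```javascript
// Per-instance: every object carries its own copy of the method
// (10,000 instances means 10,000 function objects).
function WidgetBuilderInstance(name) {
  this.name = name;
  this.build = function () { return 'widget:' + this.name; };
}

// Prototype: one function object shared by all instances.
function WidgetBuilderProto(name) {
  this.name = name;
}
WidgetBuilderProto.prototype.build = function () {
  return 'widget:' + this.name;
};

const a = new WidgetBuilderInstance('a');
const b = new WidgetBuilderInstance('b');
const c = new WidgetBuilderProto('c');
const d = new WidgetBuilderProto('d');

a.build !== b.build; // true: two separate closures in memory
c.build === d.build; // true: both resolve to the same prototype function
```

In a heap snapshot, the instance version shows up as many small closure objects retained by the instances; the prototype version shows a single function retained by the prototype.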
You might want to start here:
https://developers.google.com/chrome-developer-tools/docs/heap-profiling
It goes into detail on how to understand what you're reading. As for what is considered a lot of memory, that's a tricky question. If your website is targeted at mobile devices, I would start there as a constraint. To come up with a good comparison, I'd suggest running the profiler against sites that you use every day and observing the memory consumption there.
If you find you're using more memory than gmail you might want to rethink ;)
I also recommend checking out jsPerf:
http://jsperf.com/prototype-vs-instance-functions
There is a LOT of prior work done on that site in regards to performance testing. You might be able to save yourself some time.

Javascript acceleration?

Is there any way to speed up JS scripts (i mean some complex DOM manipulations, like games or animations)?
There's really no way to speed it up across the board. You can compact it, but it won't be much faster.
Use the V8 javascript engine? :P
The only way you can do that is to reduce the amount of dom and scope access in your code.
e.g. when accessing an element in the dom multiple times, instead of
document.something.mydiv.myelement.property1 = this
document.something.mydiv.myelement.property2 = bla
document.something.mydiv.myelement.property3 = foo
document.something.mydiv.myelement.property4 = bar
do
var myel = document.something.mydiv.myelement
myel.property1 = this
myel.property2 = bla
myel.property3 = foo
myel.property4 = bar
Note that some JavaScript engines will automatically make this optimization for you, but in older browsers the latter will definitely be faster than the former, because it only has to walk the access chain once to reach myel.
Have a look at this video for more JavaScript optimization techniques:
http://www.youtube.com/watch?v=mHtdZgou0qU
If you mean outside the browser, then you should use the fastest engine around, i.e. Chrome's V8 JavaScript engine.
Inside the browser there is a wealth of optimization techniques for faster-loading JavaScript; here is a good place for optimization techniques by Google.
Minify your JavaScript using a tool like YUI Compressor, then serve it gzipped.
Only load the bare minimum you need.
Complex animations are still best served by rich UI plugins, i.e. Flash/Silverlight.
For animations, look at using the HTML5 Canvas element for browsers that support it, and fall back to Flash for ones that don't.
Google Maps is a good example of what's possible with pure JavaScript, although they've spent a wealth of resources optimizing the performance for each browser. As always, the best way to improve your speed is to benchmark different approaches. E.g. setting div.innerHTML is most of the time quicker than using the DOM to dynamically add elements, etc.
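The innerHTML point can be sketched like this (function names are illustrative): build the markup as one string and touch the live DOM once, instead of appending nodes in a loop.

```javascript
// Slow: one DOM mutation (and potential reflow) per item.
function renderListSlow(container, items) {
  for (let i = 0; i < items.length; i++) {
    const li = document.createElement('li');
    li.textContent = items[i];
    container.appendChild(li);
  }
}

// Often faster: assemble a string, assign innerHTML once.
// (Only safe when the items are trusted/escaped markup.)
function renderListFast(container, items) {
  const parts = [];
  for (let i = 0; i < items.length; i++) {
    parts.push('<li>' + items[i] + '</li>');
  }
  container.innerHTML = parts.join('');
}
```

As the answer says, benchmark both in your target browsers; the winner has shifted over the years.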
The best you can do is optimize your code. Use a profiler -- for Firefox there's Firebug, Safari and Windows IE 8 have JavaScript debuggers and profilers built in (At least I believe IE 8 does, someone correct me if I'm wrong...). Running a profile on your code will show you where the slowest parts are, and those are the sections you can focus on optimizing... perhaps with more questions that are a lot more specific.
That's a very vague question. There are a million things you can do to speed up your code (Ajaxian has 157 articles on the topic at the time of this writing), but there is no "Make Me Faster" button that magically makes all scripts run faster. If there were, life would be so much easier.
The closure project from Google makes some claims along those lines, although I haven't tried it personally.
The Closure Compiler compiles JavaScript into compact, high-performance code. The compiler removes dead code and rewrites and minimizes what's left so that it downloads and runs quickly. It also checks syntax, variable references, and types, and warns about common JavaScript pitfalls.
Try to make animation and display changes to positioned or 'offscreen' elements; redraw the page the fewest number of times.
Make multiple style changes by changing the cssText or the className, not one property at a time.
If you need to lookup an element or property twice in the same process, you should have made a local reference the first time.
And remember to turn off the debugger, if you are not debugging.
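The "batch your style changes" advice can be sketched like this (element and class names are illustrative; the `highlight` class is assumed to be defined in your stylesheet):

```javascript
// Slow: three separate writes, each a potential style recalculation.
function highlightSlow(el) {
  el.style.color = 'red';
  el.style.backgroundColor = 'yellow';
  el.style.border = '1px solid black';
}

// Faster: one cssText assignment replaces all inline styles at once...
function highlightCssText(el) {
  el.style.cssText = 'color: red; background-color: yellow; border: 1px solid black;';
}

// ...or, best, toggle a predefined class and keep the styling in CSS.
function highlightClass(el) {
  el.className = 'highlight';
}
```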

How do the various Javascript optimization projects affect DOM performance?

There's a lot of capital C, capital S computer science going into Javascript via the Tracemonkey, Squirrelfish, and V8 projects. Do any of these projects (or others) address the performance of DOM operations, or are they purely Javascript computation related?
The performance of pure DOM operations (getElementById/Tagname/Selector, nextChild, etc.) is unaffected, as they're already in pure C++.
How the JS engine improvements will affect performance does depend to an extent on the particular techniques used for the performance improvements, as well as on the performance of the DOM->JS bridge.
An example of the former is TraceMonkey's dependence on all calls being to JS functions. Because a trace effectively inlines the path of execution, any point where the JS hits code that cannot be inlined (native code, true polymorphic recursion, exception handlers) aborts the trace, and execution falls back to the interpreter. The TM developers are doing quite a lot of work to improve the amount of code that can be traced (including handling polymorphic recursion); however, realistically, tracing across calls to arbitrary native functions (e.g. the DOM) isn't feasible. For that reason I believe they are looking at implementing more of the DOM in JS (or at least in a JS-friendly manner). That said, when code is traceable, TM can do an exceptionally good job, as it can lower most "objects" to more efficient and/or native equivalents (e.g. using machine ints instead of the JS Number implementation).
JavaScriptCore (which is where SquirrelFish Extreme lives) and V8 take a more similar approach, in that they both JIT all JS code immediately and produce code that is more speculative (e.g. if you are doing a*b, they generate code that assumes a and b are numbers and falls back to exceptionally slow code if they aren't). This has a number of benefits over tracing, namely that you can JIT all code, regardless of whether or not it calls native code, throws exceptions, etc., which means a single DOM call won't destroy performance. The downside is that all code is speculative: TM will inline calls to Math.floor, etc., but the best JSC/V8 could do for a = Math.floor(0.5) would be the equivalent of a = (Math.floor == realFloor) ? inline : Math.floor(0.5). This has costs in both performance and memory usage, and it also isn't particularly feasible. The reason is the up-front compilation: whereas TM only JITs code after it has run (and so knows exactly which function was called), JSC and V8 have no real basis on which to make such an assumption and basically have to guess (and currently neither attempts this). The one thing that V8 and JSC do to try to compensate for this problem is to track what they've seen in the past and incorporate that into the path of execution. Both use a combination of techniques to do this caching: in especially hot cases they rewrite small portions of the instruction stream, and in other cases they keep out-of-band caches. Broadly speaking, if you have code that goes
a.x * a.y
V8 and JSC will check the 'implicit type'/'Structure' twice -- once for each access, and then check that a.x and a.y are both numbers, whereas TM will generate code that checks the type of a only once, and can (all things being equal) just multiply a.x and a.y without checking that they're numbers.
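A practical consequence of those shape ('Structure'/hidden-class) checks: construct your objects with a consistent property layout, so a site like a.x * a.y only ever sees one shape. A hypothetical sketch:

```javascript
// Every object from this factory has the same properties in the same order,
// so they share one hidden class / Structure.
function makePoint(x, y) {
  return { x: x, y: y };
}

function area(a) {
  // With a single shape at this site: one shape check, then two typed loads.
  return a.x * a.y;
}

// Mixing layouts -- e.g. { y: ..., x: ... }, or adding properties after
// construction -- forces the engine to handle multiple shapes here instead.
area(makePoint(3, 4)); // 12
```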
If you're looking at pure execution speed currently there's something of a mixed bag as each engine does appear to do better at certain tasks than others -- TraceMonkey wins in many pure maths tests, V8 wins in heavily dynamic cases, JSC wins if there's a mix. Of course while that's true today it may not be tomorrow as we're all working hard to improve performance.
The other issue I mentioned was the DOM<->JS binding cost -- this can actually play a very significant part in web performance. The best example of this is Safari 3.1/2 vs Chrome on the Dromaeo benchmark. Chrome is based off the Safari 3.1/2 branch of WebKit, so it's reasonably safe to assume similar DOM performance (compiler differences could cause some degree of variance). In this benchmark Safari 3.1/2 actually beats Chrome despite having a JS engine that is clearly much, much slower; this is basically due to more efficient bindings between JSC and WebCore (the DOM/rendering/etc. of WebKit) than between V8 and WebCore.
Currently looking at TM's DOM bindings seems unfair as they haven't completed all the work they want to do (alas) so they just fall back on the interpreter :-(
Errmmm, that went on somewhat longer than intended, so short answer to the original question is "it depends" :D
They're pure JavaScript. Unless a particular DOM method call is implemented in JS, they'll have little effect (not to say there hasn't been work done on reducing the overhead of such calls however).
DOM optimization is a whole 'nother kettle of squirrels monkeys spiders fish... The layout and even rendering engines come into play, and each browser has their own implementation and optimization strategy.
