How expensive are power operations in Javascript?

Does it take a lot of CPU cycles to raise 2 to the power of, say, 29? Multiple times each second?
Or is it worth creating a cache of power variables?

Questions about performance in JavaScript don't usually have a single answer, for the simple reason that each engine is different. What's expensive in one engine (say, Microsoft's JScript) is cheap in another (say, Google's V8).
So a two-part answer:
Don't worry about it until/unless you have a real-world performance issue related to power operations. Until then, it's just a waste to spend time on it.
If that happens, profile the performance of alternatives on the engines you intend to support. If this is a browser-based thing you're working on, http://jsperf.com can be useful there, and all modern browsers have development tools, some with pretty decent profilers.

Modern JavaScript runtimes do this sort of thing fast!
JavaScript can perform (in Google's V8, SpiderMonkey, JavaScriptCore and in IE10) over 44 million 'two to the power of 29' operations per second.
Here is a perf test demonstrating it.
Whenever you run into this sort of question you should profile your code.
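If you do end up measuring, here is a minimal benchmark sketch along those lines; the iteration count and the timing helper are my own illustrative assumptions, not figures from the answers above, and they compare Math.pow, the exponentiation operator (where supported), a cached constant, and a bit shift:

// Minimal benchmark sketch: compare ways of computing 2^29.
// The iteration count is arbitrary; adjust it for your environment.
const ITERATIONS = 1e7;

function time(label, fn) {
  const start = Date.now();
  let result = 0;
  for (let i = 0; i < ITERATIONS; i++) {
    result = fn();
  }
  console.log(label + ": " + (Date.now() - start) + " ms (result " + result + ")");
}

const CACHED = Math.pow(2, 29); // precomputed once, the "cache of power variables" idea

time("Math.pow(2, 29)", function () { return Math.pow(2, 29); });
time("2 ** 29", function () { return 2 ** 29; });
time("cached constant", function () { return CACHED; });
time("1 << 29 (bit shift)", function () { return 1 << 29; });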

Related

Is JavaScript standby functionality stored in RAM or the hard drive?

This is a bit of a strange question where I do not know enough to possibly ask it correctly, so I will do my best (googling for a worthwhile result has proven difficult):
You write a Javascript program
V8 (or another interpreter) compiles your script (I understand that WHICH interpreter is running vastly changes the answer here, so let's stick with V8)
Your JavaScript could have a formidably large footprint of executable code
Does V8 keep any routines that are not in use on the hard drive, or do the interpreted JavaScript commands stay completely in RAM?
I was wondering this because it would seem unfortunate for a massive JS program to eat into the available RAM allotment a browser gives if the complexity of the JS program were overly large.
I know this gets into "if you have such a huge program you're doing it wrong", but I like to push things where I can, and if I get a better understanding of how this all works, I can make better decisions :)
(V8 developer here.) Short answer: no, V8 does not swap any unused things (code or otherwise) to disk at runtime.
Executable code is typically not the biggest consumer of memory we see in V8; it tends to be dwarfed by (non-code) data. Still, the amount of code can certainly be significant; one of the reasons why V8 switched its first (unoptimized) execution tier from a compiler to an interpreter a few years ago was that the interpreter's bytecode is much smaller than the machine code produced by the non-optimizing compiler it replaced. When a function is optimized, it's still compiled to machine code; but since typically only relatively few functions get optimized, that usually only takes up a low single-digit percentage of overall memory.
In embedders that support it (like Chrome), V8 does support caching certain things between runs, including code. That's a performance optimization: some work can be avoided if you visit the same website multiple times. This mechanism doesn't temporarily free up any memory, and it's not supposed to.
Generally, it's not the job of individual applications to swap things to disk -- it's the job of the operating system. When all running applications combined use more memory than is available, then the kernel will pick some "pages" (chunks of memory) and write them to disk. Applications don't notice any of that (except a massive loss of performance when those pages are needed again) and don't have to do any work to support it. So this part of the answer applies not just to V8, but also to other JavaScript engines, and in general to all other programs you may run on your computer.

What blocks Ruby and Python from getting JavaScript V8 speed?

Are there any Ruby / Python features that block the implementation of optimizations (e.g. inline caching) that the V8 engine has?
Python is co-developed by Google people, so it shouldn't be blocked by software patents.
Or is this rather a matter of the resources Google puts into the V8 project?
What blocks Ruby and Python from getting JavaScript V8 speed?
Nothing.
Well, okay: money. (And time, people, resources, but if you have money, you can buy those.)
V8 has a team of brilliant, highly-specialized, highly-experienced (and thus highly-paid) engineers working on it, that have decades of experience (I'm talking individually – collectively it's more like centuries) in creating high-performance execution engines for dynamic OO languages. They are basically the same people who also created the Sun HotSpot JVM (among many others).
Lars Bak, the lead developer, has been literally working on VMs for 25 years (and all of those VMs have led up to V8), which is basically his entire (professional) life. Some of the people writing Ruby VMs aren't even 25 years old.
Are there any Ruby / Python features that block the implementation of optimizations (e.g. inline caching) that the V8 engine has?
Given that at least IronRuby, JRuby, MagLev, MacRuby and Rubinius have either monomorphic (IronRuby) or polymorphic inline caching, the answer is obviously no.
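For readers unfamiliar with the technique, here is a hedged JavaScript sketch of what inline caching exploits; the comments describe the general idea, not the guaranteed behaviour of any particular engine:

// Monomorphic call site: getX only ever sees objects of one shape,
// so an engine with inline caching can specialize the property lookup.
function getX(point) {
  return point.x;
}
for (let i = 0; i < 1000; i++) {
  getX({ x: i, y: i }); // always the shape {x, y}
}

// Polymorphic call site: the same lookup now sees several shapes, so the
// inline cache must check against multiple cached shapes (and eventually
// falls back to a generic lookup once the site becomes megamorphic).
const mixed = [{ x: 1 }, { x: 1, y: 2 }, { x: 1, z: 3 }, { x: 1, y: 2, z: 3 }];
for (let i = 0; i < 1000; i++) {
  getX(mixed[i % mixed.length]);
}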
Modern Ruby implementations already do a great deal of optimizations. For example, for certain operations, Rubinius's Hash class is faster than YARV's. Now, this doesn't sound terribly exciting until you realize that Rubinius's Hash class is implemented in 100% pure Ruby, while YARV's is implemented in 100% hand-optimized C.
So, at least in some cases, Rubinius can generate better code than GCC!
Or is this rather a matter of the resources Google puts into the V8 project?
Yes. Not just Google. The lineage of V8's source code is 25 years old now. The people who are working on V8 also created the Self VM (to this day one of the fastest dynamic OO language execution engines ever created), the Animorphic Smalltalk VM (to this day one of the fastest Smalltalk execution engines ever created), the HotSpot JVM (the fastest JVM ever created, probably the fastest VM period) and OOVM (one of the most efficient Smalltalk VMs ever created).
In fact, Lars Bak, the lead developer of V8, worked on every single one of those, plus a few others.
There's a lot more impetus to heavily optimize JavaScript interpreters, which is why we see so many resources being put into them by Mozilla, Google, and Microsoft. JavaScript has to be downloaded, parsed, compiled, and run in real time while a (usually impatient) human being is waiting for it; it has to run WHILE a person is interacting with it; and it's doing this in an uncontrolled client-side environment that could be a computer, a phone, or a toaster. It HAS to be efficient in order to run under these conditions effectively.
Python and Ruby run in an environment controlled by the developer/deployer: generally a beefy server or desktop system, where the limiting factor will be things like memory or disk I/O rather than execution time, or where non-engine optimizations like caching can be utilized. For these languages it probably does make more sense to focus on language and library feature set over speed optimization.
The side benefit of this is that we have two great high performance open source JavaScript engines that can and are being re-purposed for all manner of applications such as Node.js.
A good part of it has to do with community. Python and Ruby for the most part have no corporate backing. No one gets paid to work on Python and Ruby full-time (and they especially don't get paid to work on CPython or MRI the whole time). V8, on the other hand, is backed by the most powerful IT company in the world.
Furthermore, V8 can be faster because the only thing that matters to the V8 people is the interpreter -- they have no standard library to work on, no concerns about language design. They just write the interpreter. That's it.
It has nothing to do with intellectual property law. Nor is Python co-developed by Google guys (its creator works there along with a few other committers, but they don't get paid to work on Python).
Another obstacle to Python speed is Python 3. Its adoption seems to be the main concern of the language developers -- to the point that they have frozen development of new language features until other implementations catch up.
On to the technical details, I don't know much about Ruby, but Python has a number of places where optimizations could be used (and Unladen Swallow, a Google project, started to implement these before biting the dust). Here are some of the optimizations that they planned. I could see Python gaining V8 speed in the future if a JIT a la PyPy gets implemented for CPython, but that does not seem likely for the coming years (the focus right now is Python 3 adoption, not a JIT).
Many also feel that Ruby and Python could benefit immensely from removing their respective global interpreter locks.
You also have to understand that Python and Ruby are both much heavier languages than JS -- they provide far more in the way of standard library, language features, and structure. The class system of object-orientation alone adds a great deal of weight (in a good way, I think). I almost think of Javascript as a language designed to be embedded, like Lua (and in many ways, they are similar). Ruby and Python have a much richer set of features, and that expressiveness is usually going to come at the cost of speed.
Performance doesn't seem to be a major focus of the core Python developers, who seem to feel that "fast enough" is good enough, and that features that help programmers be more productive are more important than features that help computers run code faster.
Indeed, there was a (now abandoned) Google project, Unladen Swallow, to produce a faster Python interpreter compatible with the standard interpreter. PyPy is another project that intends to produce a faster Python. There is also Psyco, the forerunner of PyPy, which can provide performance boosts to many Python scripts without swapping out the whole interpreter, and Cython, which lets you write high-performance C libraries for Python using something very much like Python syntax.
Misleading question. V8 is a JIT (just-in-time compiler) implementation of JavaScript, and in its most popular non-browser incarnation, Node.js, it is built around an event loop. CPython is not a JIT and not evented. But both exist in the Python world, most notably in the PyPy project, a CPython 2.7 (and soon 3.0+) compatible JIT. And there are loads of evented server libraries, such as Tornado. Real-world tests comparing PyPy running Tornado against Node.js exist, and the performance differences are slight.
I just ran across this question and there is also a big technical reason for the performance difference that wasn't mentioned. Python has a very large ecosystem of powerful software extensions, but most of these extensions are written in C or other low-level languages for performance and are heavily tied to the CPython API.
There are lots of well-known techniques (JIT, modern garbage collector, etc) that could be used to speed up the CPython implementation but all would require substantial changes to the API, breaking most of the extensions in the process. CPython would be faster, but a lot of what makes Python so attractive (the extensive software stack) would be lost. Case in point, there are several faster Python implementations out there but they have little traction compared to CPython.
Because of different design priorities and use-case goals, I believe.
In general, the main purpose of scripting (a.k.a. dynamic) languages is to be "glue" between calls to native functions. And these native functions should a) cover the most critical/frequently used areas and b) be as effective as possible.
Here is an example:
jQuery sort causing iOS Safari to freeze
The freeze there is caused by excessive use of get-by-selector calls. If get-by-selector were implemented in native code and efficiently, there would be no such problem at all.
Consider the ray-tracer demo that is frequently used to show off V8. In the Python world it can be implemented in native code, as Python provides all the facilities for native extensions. But in the V8 realm (a client-side sandbox) you have no option other than making the VM as effective as possible. And so the only way to implement the ray-tracer there is in script code.
So different priorities and motivations.
In Sciter I made a test by implementing pretty much the full jQuery core natively. On practical tasks like ScIDE (an IDE made of HTML/CSS/Script) I believe such a solution works significantly better than any VM optimizations.
As other people have mentioned, Python has a performant JIT compiler in the form of PyPy.
Making meaningful benchmarks is always subtle, but I happen to have a simple benchmark of K-means written in different languages - you can find it here. One of the constraints was that the various languages should all implement the same algorithm and should strive to be simple and idiomatic (as opposed to optimized for speed). I have written all the implementations, so I know I have not cheated, although I cannot claim for all languages that what I have written is idiomatic (I only have a passing knowledge of some of those).
I do not claim any definitive conclusion, but PyPy was among the fastest implementations I got, far better than Node. CPython, instead, was at the slowest end of the ranking.
The statement is not exactly true.
Just like V8 is just one implementation of JS, CPython is just one implementation of Python. PyPy has performance matching V8's.
Also, there is the problem of perceived performance: since V8 is natively non-blocking, Web dev leads to more performant projects because you save on I/O waits. And V8 is mainly used for Web dev, where I/O is key, so people compare it to similar projects. But you can use Python in many, many areas other than Web dev. And you can even use C extensions for a lot of tasks, such as scientific computation or encryption, and crunch data with blazing performance.
But on the web, most popular Python and Ruby projects are blocking. Python, especially, has the legacy of the synchronous WSGI standard, and frameworks like the famous Django are based on it.
You can write asynchronous Python (like with Twisted, Tornado, gevent or asyncio) or Ruby. But it's not done often. The best tools are still blocking.
However, there are some reasons why the default implementations of Ruby and Python are not as speedy as V8.
Experience
As Jörg W Mittag pointed out, the guys working on V8 are VM geniuses. Python is developed by a bunch of passionate people who are very good in a lot of domains, but not as specialized in VM tuning.
Resources
The Python Software Foundation has very little money: less than 40k a year to invest in Python. This is kind of crazy when you think that big players such as Google, Facebook and Apple all use Python, but it's the ugly truth: most work is done for free. The language that powers YouTube, and that existed before Java, has been handcrafted by volunteers.
They are smart and dedicated volunteers, but when they identify that they need more juice in a field, they can't ask for 300k to hire a top-notch specialist in that area of expertise. They have to look around for somebody who will do it for free.
While this works, it means you have to be very careful about your priorities. Hence, now we need to look at:
Objectives
Even with the latest modern features, writing JavaScript is terrible. You have scoping issues, very few collections, terrible string and array manipulation, almost no standard library apart from dates, maths and regexes, and no syntactic sugar even for very common operations.
But in V8, you've got speed.
This is because speed was the main objective for Google, since it's a bottleneck for page rendering in Chrome.
In Python, usability is the main objective. Because it's almost never the bottleneck on the project. The scarce resource here is developer time. It's optimized for the developer.
Because JavaScript implementations need not care about backwards compatibility of their bindings.
Until recently the only users of the JavaScript implementations were web browsers. Due to security requirements, only the web browser vendors had the privilege to extend the functionality by writing bindings to the runtimes. Thus there was no need to keep the C API of the bindings backwards compatible; it was permissible to ask the web browser developers to update their source code as the JavaScript runtimes evolved, since they were working together anyway. Even V8, which was a latecomer to the game and led by a very, very experienced developer, has changed its API as it became better.
OTOH Ruby is used (mainly) on the server side. Many popular Ruby extensions are written as C bindings (consider an RDBMS driver). In other words, Ruby would never have succeeded without maintaining that compatibility.
Today, the difference still exists to some extent. Developers using Node.js are complaining that it is hard to keep their native extensions backwards compatible, as V8 changes the API over time (and that is one of the reasons Node.js has been forked). IIRC Ruby is still taking a much more conservative approach in this respect.
V8 is fast due to the JIT, Crankshaft, the type inferencer and data-optimized code: tagged pointers, NaN-tagging of doubles.
And of course it does normal compiler optimizations in the middle.
The plain Ruby, Python and Perl engines do none of those, just minor basic optimizations.
The only major VM which comes close is LuaJIT, which doesn't even do type inference, constant folding, NaN-tagging or integers, but uses similarly small code and data structures, not as fat as those of the bad languages.
And my prototype dynamic languages, potion and p2, have features similar to LuaJIT's and outperform V8. With an optional type system ("gradual typing"), you could easily outperform V8, as you can bypass Crankshaft. See Dart.
The known optimized backends, like PyPy or JRuby, still suffer from various over-engineering techniques.

What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.
I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)
Are there any examples of this sort of thing being done?
What would be the best ways to measure this, accounting for things like temporary slowdowns on busy CPUs?
Notes:
I'm not interested in browser/OS detection.
At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
Things that might be interesting to measure:
Base JavaScript
DOM manipulation
DOM/CSS rendering
I'd like to do this in a way that affects the page's render-speed as little as possible.
BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.
[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]
Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.
There are several reasons for this, the main ones being:
Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.
Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, or whether that situation will continue while the user is visiting your website.
Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM manipulation function to know how fast the browser would complete it; there is no "general" or "average" that makes sense here in my experience. And even if there were, the speed with which DOM manipulation commands execute is based on the context of what is currently in the DOM, which changes when you manipulate it.
The best you can do is to either
Let your users decide what they want, and enable them to easily change that decision if they regret it
or better yet
Choose to give them something that you can be reasonably sure that the greater part of your target audience will be able to enjoy.
Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users in here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary; they don't want it and they don't understand the technical consequences of their decision before it is too late.
A different approach, that does not need explicit benchmark, would be to progressively enable features.
You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.
Ensuring that the most expensive features come last, you would present the user with a somewhat appropriate selection of features based on how speedy the browser is.
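A minimal sketch of that idea, assuming a hypothetical enableFeature() helper, made-up feature names and an arbitrary time budget:

// Sketch: enable effects in priority order until a time budget is spent.
// enableFeature() and the feature names are hypothetical placeholders.
function enableFeature(name) {
  // Placeholder: in a real page this would attach the effect to the DOM.
  console.log("Enabling " + name);
}

const TIME_BUDGET_MS = 200; // arbitrary budget, tune for your audience
const features = ["basicStyling", "animations", "parallax", "particleEffects"];

const start = Date.now();
for (let i = 0; i < features.length; i++) {
  if (Date.now() - start > TIME_BUDGET_MS) {
    break; // drop the remaining (most expensive) features
  }
  enableFeature(features[i]);
}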
You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's boomerang for good ways of timing stuff browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
Take a look at some of Google's (copyrighted!) benchmarks for V8:
http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js
http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js
I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run-time of your tests (from the time logged at start to the time logged at completion) under 100 ms on the slowest systems (these Google tests run for vastly longer than that), you should get the information you need without being detrimental to the user experience. While the Google benchmarks care about granularity between the faster systems, you don't. All you need to know is which systems take longer than XX ms to complete.
Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
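As a hedged sketch of such a check (the workload and the threshold are assumptions you would tune, not values taken from the answers above):

// Sketch: time a small regex + string-concatenation workload and treat the
// browser as "slow" if it exceeds an assumed threshold.
function quickBenchmark() {
  const start = Date.now();
  let s = "";
  for (let i = 0; i < 2000; i++) {
    s += i.toString(36);
    /([a-z]+)(\d*)/.exec(s.slice(-10));
  }
  return Date.now() - start; // elapsed milliseconds
}

const SLOW_THRESHOLD_MS = 50; // assumed cutoff, well under the 100 ms target
const isSlow = quickBenchmark() > SLOW_THRESHOLD_MS;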
You could run all the benchmarks you want using Web Workers, then, according to the results, store your verdict about the machine's performance in a cookie.
This only works in HTML5-supporting browsers, of course.
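A rough sketch of that approach; "benchmark-worker.js", the cookie name and the expiry are illustrative assumptions, not existing files or conventions:

// Sketch: run the benchmark off the main thread and remember the result.
// "benchmark-worker.js" is a hypothetical file that runs a workload and
// calls postMessage(elapsedMs) when it finishes.
if (window.Worker && document.cookie.indexOf("perfScore=") === -1) {
  const worker = new Worker("benchmark-worker.js");
  worker.onmessage = function (event) {
    const elapsedMs = event.data;
    // Persist for a week so the test is not repeated on every page view.
    document.cookie = "perfScore=" + elapsedMs + "; max-age=" + 7 * 24 * 3600;
    worker.terminate();
  };
}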
Some Ideas:
Putting a time-limit on the tests seems like an obvious choice.
Storing test results in a cookie also seems obvious.
Poor test performance on a test could pause further scripts
and trigger display of a non-blocking prompt UI (like the save password prompts common in modern web browsers)
that asks the user if they want to opt into further scripting effects - and store the answer in a cookie.
While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one.
.
On a side note - slow network speeds could also probably be tested
by timing the download of external resources (like the page's own CSS or JavaScript files)
and comparing that result with the JavaScript benchmark results.
This may be useful on sites relying on loads of XHR effects and/or heavy use of <img/>s.
.
It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
I came across a similar question and I solved it this way; in fact, it helped me make some decisions.
After rendering the page I do:
let now, finishTime, i = 0;
now = Date.now(); // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200; // we add 200 ms (1/5 of a second)
while (now < finishTime) { // count how many iterations fit in the 200 ms budget
    i++;
    now = Date.now();
}
console.log("I looped " + i + " times!!!");
After doing that I tested it on several browsers with different OSes, and the i value gave me the following results:
Windows 10 - 8GB RAM:
approx. 1,500,000 on Chrome
approx. 301,327 on Internet Explorer
141,280 (on Firefox on a virtual machine running Lubuntu, 2GB given)
MacOS 8GB RAM:
approx. 3,000,000 on Safari
approx. 1,500,000 on Firefox
70,000 (on Firefox 41 on a virtual machine running Windows XP, 2GB given)
Windows 10 - 4GB RAM (this is an old computer I have):
approx. 500,000 on Google Chrome
I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input. This helped me limit the number of elements I create based on the performance measured. BUT
the JS is not everything! Even though the Lubuntu OS running on a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while it took more than 12 seconds for IE and the performance was terrible!
So that could be a good way, but when it comes to rendering, that's another story; still, this definitely could help in making some decisions.
Good luck, everyone!

How do the various Javascript optimization projects affect DOM performance?

There's a lot of capital C, capital S computer science going into Javascript via the Tracemonkey, Squirrelfish, and V8 projects. Do any of these projects (or others) address the performance of DOM operations, or are they purely Javascript computation related?
The performance of pure DOM operations (getElementById/Tagname/Selector, nextChild, etc.) is unaffected, as they're already in pure C++.
How the JS engine improvements will affect performance does depend to an extent on the particular techniques used for the performance improvements, as well as on the performance of the DOM->JS bridge.
An example of the former is TraceMonkey's dependence on all calls being to JS functions. Because a trace effectively inlines the path of execution, at any point where the JS hits code that cannot be inlined (native code, true polymorphic recursion, exception handlers) the trace is aborted and execution falls back to the interpreter. The TM developers are doing quite a lot of work to improve the amount of code that can be traced (including handling polymorphic recursion); however, realistically, tracing across calls to arbitrary native functions (e.g. the DOM) isn't feasible. For that reason I believe they are looking at implementing more of the DOM in JS (or at least in a JS-friendly manner). That said, when code is traceable TM can do an exceptionally good job, as it can lower most "objects" to more efficient and/or native equivalents (e.g. use machine ints instead of the JS Number implementation).
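To make that concrete, here is a hedged sketch of the difference being described: a pure numeric loop that a tracing JIT can keep on trace, versus the same loop with a native DOM call inside, which (per the description above) aborts the trace. The exact behaviour depends on the engine version; this only illustrates the shape of the problem:

// Stays on trace: nothing but JS arithmetic inside the hot loop.
function sumSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i * i;
  }
  return total;
}

// Aborts the trace (per the description above): every iteration calls into
// native DOM code, so a tracing JIT falls back to the interpreter.
function sumWidths(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += document.body.offsetWidth;
  }
  return total;
}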
JavaScriptCore (which is where SquirrelFish Extreme lives) and V8 have a more similar approach, in that they both JIT all JS code immediately and produce code that is more speculative (e.g. if you are doing a*b they generate code that assumes a and b are numbers and falls back to exceptionally slow code if they aren't). This has a number of benefits over tracing, namely that you can JIT all code regardless of whether or not it calls native code/throws exceptions, etc., which means a single DOM call won't destroy performance. The downside is that all code is speculative -- TM will inline calls to Math.floor, etc., but the best JSC/V8 can do would be equivalent to rewriting a=Math.floor(0.5) as a=(Math.floor == realFloor) ? inline : Math.floor(0.5). This has costs both in performance and memory usage, and it also isn't particularly feasible. The reason for this is the up-front compilation: whereas TM only JITs code after it has run (and so knows exactly what function was called), JSC and V8 have no real basis on which to make such an assumption and basically have to guess (and currently neither attempts this). The one thing that V8 and JSC do to try to compensate for this problem is to track what they've seen in the past and incorporate that into the path of execution. Both use a combination of techniques to do this caching; in especially hot cases they rewrite small portions of the instruction stream, and in other cases they keep out-of-band caches. Broadly speaking, if you have code that goes
a.x * a.y
V8 and JSC will check the 'implicit type'/'Structure' twice -- once for each access, and then check that a.x and a.y are both numbers, whereas TM will generate code that checks the type of a only once, and can (all things being equal) just multiply a.x and a.y without checking that they're numbers.
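As a hedged illustration of why those checks matter, compare a shape-stable object with one whose properties are added in varying order; the comments describe the general behaviour of hidden-class/'Structure'-based engines, not any specific version:

// Shape-stable: every object gets x then y, so a.x * a.y can reuse one
// cached hidden-class/Structure check per access.
function makeStable(x, y) {
  return { x: x, y: y };
}

// Shape-unstable: properties are added in different orders, producing
// different hidden classes and defeating the cached checks.
function makeUnstable(x, y, flip) {
  const obj = {};
  if (flip) { obj.y = y; obj.x = x; } else { obj.x = x; obj.y = y; }
  return obj;
}

function area(a) {
  return a.x * a.y; // the checked multiply discussed above
}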
If you're looking at pure execution speed currently there's something of a mixed bag as each engine does appear to do better at certain tasks than others -- TraceMonkey wins in many pure maths tests, V8 wins in heavily dynamic cases, JSC wins if there's a mix. Of course while that's true today it may not be tomorrow as we're all working hard to improve performance.
The other issue I mentioned was the DOM<->JS binding cost -- this can actually play a very significant part in web performance. The best example of this is Safari 3.1/2 vs. Chrome on the Dromaeo benchmark. Chrome is based off the Safari 3.1/2 branch of WebKit, so it's reasonably safe to assume similar DOM performance (compiler differences could cause some degree of variance). In this benchmark Safari 3.1/2 actually beats Chrome despite having a JS engine that is clearly much, much slower, basically because the bindings between JSC and WebCore (the DOM/rendering/etc. of WebKit) are more efficient than those between V8 and WebCore.
Currently looking at TM's DOM bindings seems unfair as they haven't completed all the work they want to do (alas) so they just fall back on the interpreter :-(
Errmmm, that went on somewhat longer than intended, so short answer to the original question is "it depends" :D
They're pure JavaScript. Unless a particular DOM method call is implemented in JS, they'll have little effect (not to say there hasn't been work done on reducing the overhead of such calls, however).
DOM optimization is a whole 'nother kettle of squirrels monkeys spiders fish... The layout and even rendering engines come into play, and each browser has their own implementation and optimization strategy.

Does Silverlight have a performance advantage over JavaScript?

At a recent discussion on Silverlight the advantage of speed was brought up. The argument for Silverlight was that it performed better in the browser than Javascript because it is compiled (and managed) code.
It was then stated that this advantage only applies to IE because IE interprets Javascript which is inefficient when compared to that of other browsers such as Chrome and FireFox which compile Javascript to machine code before execution and as such perform as well as Silverlight.
Does anybody have a definitive answer to this performance question, i.e. do/will Silverlight and JavaScript have comparable performance on Chrome and Firefox?
Speculating is fun. Or we could actually try a test or two...
That Silverlight vs. Javascript chess sample has been updated for Silverlight 2. When I run it, C# averages 420,000 nodes per second vs. Javascript at 23,000 nodes per second. I'm running the dev branch of Google Chrome (v. 0.4.154.25). That's still almost an 18x speed advantage for Silverlight.
Primes calculation shows a 3x advantage for Silverlight: calculating 1,000,000 primes in Javascript takes 3.7 seconds, in Silverlight takes 1.2 seconds.
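For context, here is a minimal sketch of what the JavaScript side of such a pure-calculation benchmark might look like; this is an illustrative trial-division workload, not the actual benchmark code:

// Illustrative workload: count the primes below a limit by trial division,
// roughly the kind of raw-calculation test described above.
function countPrimes(limit) {
  let count = 0;
  for (let n = 2; n < limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

const start = Date.now();
countPrimes(1000000);
console.log("Elapsed: " + (Date.now() - start) + " ms");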
So I think that for calculation, there's still a pretty strong advantage for Silverlight, and my gut feel is that it's likely to stay that way. Both sides will continue to optimize, but there are some limits to what you can optimize in a dynamic language.
Silverlight doesn't (yet) have an advantage when it comes to animation. For instance, the Bubblemark test shows Javascript running at 170 fps, and Silverlight running at 100 fps. I think we can expect to see that change when Silverlight 3 comes out, since it will include GPU support.
JavaScript is run in a virtual machine by most browsers. However, JavaScript is still a funky language, and even a "fast" virtual machine like V8 is incredibly slow by modern standards.
I'd expect the CLR to be faster.
I'd say that architecturally, it's a wash.
On the one hand Silverlight is MSIL code, which is reasonably fast compared to raw, optimized native code but still runs slower due to the VM (CLR) overhead and will still have slow initial load times when being ngen'd.
On the other hand the speed of Javascript is much less reliable due to the huge variations in Javascript engines which have an order of magnitude, or more, range in performance. You have slow interpreters like IE, though IE8 is speeding things up, and then you have faster compilers/interpreters like SpiderMonkey and V8 which have only recently begun to explore the performance limits of Javascript. There's also new technologies in the R&D phase like TraceMonkey which have tremendous potential to vastly improve Javascript performance (getting close to native code speeds). Javascript does have the inherent disadvantage that it is single-threaded, but given the difficulty of writing good threaded code it's hard to say how much difference that makes.
At the end of the day when comparing apples to apples the real performance bottleneck is the DOM, and there it doesn't much matter what technology you're using to manipulate it.
I don't understand why you're trying to compare a scripting language with a browser plug-in.
They don't do the same thing. The former interacts with the DOM while the latter runs multimedia apps inside the browser.
Comparing Flash and Silverlight from a performance point of view would seem more useful to me.
EDIT: After some research I found out that you can interact with the DOM in Silverlight. I don't think it can be seen as a good JavaScript replacement though, performance concerns aside, unless you need some heavy client-side interaction. I see two main disadvantages:
1) You will force your users to download a Silverlight app instead of relying on a relatively small .js file.
2) Your users are required to install Silverlight before using your page.
From the cursory testing I've done, Silverlight runs faster.
Here are some interesting results I gathered from http://bubblemark.com/
In general, Silverlight was much faster, but Chrome's javascript implementation tore everyone else to bits!
Keep in mind, this is only on one machine, one OS (XP), etc.; you would need to do much more extensive tests to learn more.
I'd say yes, since it has .NET's CLR. That said, with recent developments in JavaScript implemented in Google Chrome and in the yet-to-be-fully-released Firefox 3.1, one may want to do some benchmarking of one's own; I don't know of any comparisons as yet. (Anyone?)
Nonetheless, in my opinion, .NET should generally be faster than JavaScript, and, as has been noted before, this will not speed up the network. Consequently, for complex algorithms Silverlight will be faster, but for network requests you may not notice any difference.
On the performance question, you may want to have a look at Flash 10, which allows C/C++ code using "Alchemy". This may be a more portable solution than Silverlight.
It looks like Chrome's JavaScript implementation is faster than Silverlight.
Platforms should be considered here. How Silverlight performs on Linux, Solaris or Mac is a really big question!
What about HTML5? I think that when comparing the performance of JavaScript, HTML plays a serious role. So we should definitely compare the performance of HTML5 + JavaScript against Silverlight.
Sure, if you're using "Internet Exploder" it probably will...
If you're using V8 (Chrome) or the upcoming Safari and Firefox, I seriously doubt it ;)
I would love to see that chess match, BTW, where IE is playing using Silverlight and Chrome is using JavaScript. THAT would rock MSFT...!! ;)
