Is it possible to accelerate WebGL matrix multiplication through PNaCl? - javascript

The poor performance of matrix multiplication in JavaScript is an obstacle for high-performance WebGL, so I am thinking about using PNaCl to accelerate it.
Ideally, I'd like to pass the ArrayBuffer (Float32Array) and the matrix to PNaCl, have the native code perform the multiplication and update the values in the buffer, and finally notify the page (JavaScript).
But I doubt whether the buffer memory can be shared between PNaCl and the page's JavaScript.
If it can't, I would have to pass the buffer back to the page, and I am not sure what such an operation would do to performance.
Any suggestion will be appreciated!
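For reference, the JavaScript side of the round trip described above would look roughly like the sketch below. It only illustrates the message-passing pattern; the element id (matmul_module) and the idea that the module echoes a result buffer back are assumptions made for the example, not part of any real module.
// Hypothetical JS side of a PNaCl round trip; the element id and the
// message protocol are made up for illustration.
var naclModule = document.getElementById('matmul_module');

// Results posted by the module (pp::Instance::PostMessage) arrive as
// 'message' events; if the module sends back a VarArrayBuffer,
// event.data is an ArrayBuffer we can view as a Float32Array again.
naclModule.addEventListener('message', function (event) {
  var result = new Float32Array(event.data);
  // ... hand the transformed data back to WebGL here ...
}, true);

// Send the vertex data. PPAPI moves the ArrayBuffer via shared memory,
// but the call still crosses a process boundary, so latency applies.
var vertices = new Float32Array(16 * 1000);  // e.g. 1000 mat4-sized blocks
naclModule.postMessage(vertices.buffer);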

PPAPI passes the ArrayBuffer using shared memory, so copying will be minimal.
https://code.google.com/p/chromium/codesearch#chromium/src/ppapi/proxy/plugin_array_buffer_var.h
However, PNaCl plugins run in a different (plugin) process in Chrome, so latency (time to send the message to the plugin and receive an answer) may negate any performance improvement from native code.

As with all optimization questions, you should profile your code to see if the matrix multiplication is even an issue. If it is, bbudge is right, you'll likely lose any performance gains by having to pass the array to PNaCl and back to JavaScript.
asm.js code runs in the same process and stack as JavaScript, so you may see benefits by using it. Take a look at http://jsperf.com/matrix-multiplication-with-asm-js/6. Unfortunately, there are no guarantees that asm.js will be performant on all browsers. If a browser doesn't support asm.js directly, it will be executed as plain JavaScript, which may end up being slower.
When WebAssembly is available, that will likely be your best bet.
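As a baseline, the multiplication itself is small enough that plain JavaScript over a Float32Array often does fine once the JIT warms up. Below is a minimal sketch of a column-major 4x4 multiply (the layout WebGL expects); real libraries such as glMatrix fully unroll the loops, and the preallocated output array is just one common way to avoid garbage-collection churn.
// out = a * b for column-major 4x4 matrices stored in Float32Arrays.
// Loop form for brevity; libraries unroll this for extra speed.
function mat4multiply(out, a, b) {
  for (var col = 0; col < 4; col++) {
    for (var row = 0; row < 4; row++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

var out = new Float32Array(16);
// mat4multiply(out, modelMatrix, viewMatrix);  // reuse `out` each frame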

Related

Is JavaScript standby functionality stored in RAM or the hard drive?

This is a bit of a strange question where I do not know enough to possibly ask it correctly, so I will do my best (googling for a worthwhile result has proven difficult):
You write a Javascript program
V8 (or other interpreters) compiles your script (I understand WHICH interpreter is running vastly changes the results of the answer to this, so let's stick with V8)
Your Javascript could have a formidably large footprint of executable code
Does V8 keep any routines that are not in use on the hard drive? Or do the interpreted JavaScript commands stay completely in RAM?
I was wondering this because it would seem unfortunate for a massive JS program to eat into the available RAM allotment a browser gives if the complexity of the JS program were overly large.
I know this gets into: if you have such a huge program you're doing it wrong, but I like to push things where I can and if I get a better understanding of how this all works, I can make better decisions :)
(V8 developer here.) Short answer: no, V8 does not swap any unused things (code or otherwise) to disk at runtime.
Executable code is typically not the biggest consumer of memory we see in V8, it tends to be dwarfed by (non-code) data. Still, the amount of code can certainly be significant; one of the reasons why V8 switched its first (unoptimized) execution tier from a compiler to an interpreter a few years ago was because that interpreter's bytecode is much smaller than the earlier non-optimizing compiler's machine code it replaced. When a function is optimized, it's still compiled to machine code; but since typically only relatively few functions get optimized, that usually only takes up a low single-digit percentage of overall memory.
In embedders that support it (like Chrome), V8 does support caching certain things between runs, including code. That's a performance optimization: some work can be avoided if you visit the same website multiple times. This mechanism doesn't temporarily free up any memory, and it's not supposed to.
Generally, it's not the job of individual applications to swap things to disk -- it's the job of the operating system. When all running applications combined use more memory than is available, then the kernel will pick some "pages" (chunks of memory) and write them to disk. Applications don't notice any of that (except a massive loss of performance when those pages are needed again) and don't have to do any work to support it. So this part of the answer applies not just to V8, but also to other JavaScript engines, and in general to all other programs you may run on your computer.

Are the functions available in browser WebGL and node.js's server-side node-webgl the same?

Currently I am trying to convert browser-based, client-side volume rendering code to server-side, pure-JavaScript-based rendering. I use node-webgl on the server side.
I use an open-source WebGL-based browser implementation. My question is: are the functions of browser-based WebGL the same as node.js's node-webgl functions? Is there any need to change the code when running it on the server (apart from the browser interactions)? Functions like initialization of shaders, cube buffers, initialization of frame buffer objects, etc. Will they change?
My whole project is based on the assumption that it works, and currently I am facing some errors, so I wanted to ask whether I am doing the right thing.
Regards,
Prajwal
Reading the docs, node-webgl is not really compatible with actual WebGL:
WebGL is based on OpenGL ES, a restriction of OpenGL found on desktops, for embedded systems. Because this module wraps OpenGL, it is possible to do things that may not work on web browsers
What it doesn't say, and should, is that there are also things WebGL does that will not work on desktop OpenGL.
There are tons of workarounds in real WebGL implementations to deal with those differences. Shaders are rewritten on all WebGL implementations, but looking at the implementation of node-webgl, it isn't rewriting shaders, so it can't be working around those differences.
As one example, there are words reserved in OpenGL GLSL that are not reserved in WebGL. WebGL implementations work around that; node-webgl will not.
On top of that, there will be missing functions. For example, WebGL has overloads of texImage2D and texSubImage2D that take an HTMLImageElement, an HTMLCanvasElement, or an HTMLVideoElement, but those elements do not exist in node.js (a sketch of the difference follows below).
Another is the whole interaction with depth and stencil buffer formats for renderbuffers.
Another: there's no support for the various pixelStorei additions in WebGL.
There are many, many other similar issues.
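To make the texImage2D point concrete, here is a rough sketch of the two overloads; gl, image, and the pixel data are assumed to exist and are only for illustration.
// Browser-only overload: upload a texture straight from a DOM element.
// `image` is an HTMLImageElement, which has no equivalent in node.js.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

// Raw-pixel overload: the only style available without DOM elements,
// e.g. when targeting node-webgl.
var width = 256, height = 256;
var pixels = new Uint8Array(width * height * 4);  // RGBA bytes from somewhere
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);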
Security
The biggest issue is WebGL is designed to be secure whereas OpenGL is not. One of the major goals of WebGL is security because an arbitrary web page is allowed to run GPU code on your machine. WebGL takes security extremely seriously which is why it took a couple of years from initial concept (just call OpenGL) to actually shipping WebGL live in browsers. It's also why many drivers are blacklisted and yet another reason shaders are re-written.
For example, shaders are rewritten to make sure they meet certain requirements and don't exceed certain limits before being passed on to the driver. Identifiers are checked that they are not too long. They are all replaced by temporary identifiers to make sure there are no strange interactions. Field and array expressions are checked that they are not too complex. Array index clamping instructions are added. Unicode is stripped (OpenGL shaders only support ASCII). Shader features that need to be enabled/disabled are. And many other things.
Another example is checking that all buffers and textures point to valid memory and that all data that will be accessed by a shader is accounted for. Memory that is allocated is cleared. Otherwise you can possibly use the driver to spy on all of both CPU and GPU memory.
WebGL guards against all these cases.
node-webgl, on the other hand, just calls directly into the OpenGL driver with no regard for security. If you pass user data through node-webgl you may be opening your server to severe security issues. Even if you don't pass user data, you may accidentally allow reading uninitialized data from uncleared buffers and textures.
Arguably they should have named it node-opengl since it's not really WebGL in any way shape or form. To be WebGL, at a minimum, they would need to pass the WebGL conformance tests to claim to be WebGL compatible.
Yes, the functions are the same, as node-webgl is a WebGL implementation, but… the OpenGL driver on the server could be very different from what you usually have on clients. It's very possible that the server doesn't have an OpenGL-enabled graphics card, or even a graphics card at all. That could be the reason you're getting errors. You should try posting those errors so we have more information.
Also, you could try running the node-webgl tests (at https://github.com/mikeseven/node-webgl/tree/master/test) to see if your server can run them correctly.

JavaScript: figuring out max memory that could be used in a program

JavaScript in Chrome (or any other browser for that matter, but I'd rather limit the discussion to Chrome to make it simpler) does not provide an API which can be used to observe memory-related information (e.g. how much memory is being used by the current tab where the JS is running).
I am looking for a creative solution for estimating how many bytes I can cache in a JavaScript object on my web page. The problem definition is that I would like to cache as much as possible.
Can anyone think of a decent way of estimating how much memory a tab can handle before it will crash / become unusable on a machine? I guess a statistical approach could work out fine for some cases, but I'm looking for something more dynamic.
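One partial option (no answer was recorded here): Chrome exposes a non-standard performance.memory object that reports heap sizes, which can at least give a rough sense of headroom. A minimal sketch, assuming it is available; the 25% and 500 MB figures below are arbitrary examples, not recommendations.
// Non-standard and Chrome-only; guard against its absence.
function heapHeadroomBytes() {
  if (window.performance && performance.memory) {
    var m = performance.memory;
    // jsHeapSizeLimit is the heap ceiling Chrome reports for this tab;
    // usedJSHeapSize is what the tab currently uses.
    return m.jsHeapSizeLimit - m.usedJSHeapSize;
  }
  return null;  // unknown on other browsers
}

var headroom = heapHeadroomBytes();
var cacheBudget = (headroom === null)
    ? 50 * 1024 * 1024                                // conservative fallback
    : Math.min(headroom * 0.25, 500 * 1024 * 1024);   // arbitrary example cap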

How do the various Javascript optimization projects affect DOM performance?

There's a lot of capital C, capital S computer science going into Javascript via the Tracemonkey, Squirrelfish, and V8 projects. Do any of these projects (or others) address the performance of DOM operations, or are they purely Javascript computation related?
The performance of pure DOM operations (getElementById/Tagname/Selector, nextChild, etc.) is unaffected, as they're already in pure C++.
How the JS engine improvements will affect performance does depend to an extent on the particular techniques used for the performance improvements, as well as the performance of the DOM->JS bridge.
An example of the former is TraceMonkey's dependence on all calls being to JS functions. Because a trace effectively inlines the path of execution, at any point where the JS hits code that cannot be inlined (native code, true polymorphic recursion, exception handlers) the trace is aborted and execution falls back to the interpreter. The TM developers are doing quite a lot of work to improve the amount of code that can be traced (including handling polymorphic recursion); however, realistically, tracing across calls to arbitrary native functions (e.g. the DOM) isn't feasible. For that reason I believe they are looking at implementing more of the DOM in JS (or at least in a JS-friendly manner). That said, when code is traceable TM can do an exceptionally good job, as it can lower most "objects" to more efficient and/or native equivalents (e.g. use machine ints instead of the JS Number implementation).
JavaScriptCore (which is where SquirrelFish Extreme lives) and V8 take a more similar approach in that they both JIT all JS code immediately and produce code that is more speculative (e.g. if you are doing a*b they generate code that assumes a and b are numbers and falls back to exceptionally slow code if they aren't). This has a number of benefits over tracing, namely that you can JIT all code regardless of whether or not it calls native code/throws exceptions, etc., which means a single DOM call won't destroy performance.
The downside is that all code is speculative -- TM will inline calls to Math.floor, etc., but the best JSC/V8 can do for a = Math.floor(0.5) would be the equivalent of a = (Math.floor == realFloor) ? inline : Math.floor(0.5). This has costs in both performance and memory usage, and it also isn't particularly feasible. The reason for this is the up-front compilation: whereas TM only JITs code after it's run (and so knows exactly what function was called), JSC and V8 have no real basis to make such an assumption and basically have to guess (and currently neither attempts this).
The one thing that V8 and JSC do to try and compensate for this problem is to track what they've seen in the past and incorporate that into the path of execution. Both use a combination of techniques to do this caching: in especially hot cases they rewrite small portions of the instruction stream, and in other cases they keep out-of-band caches. Broadly speaking, if you have code that goes
a.x * a.y
V8 and JSC will check the 'implicit type'/'Structure' twice -- once for each access, and then check that a.x and a.y are both numbers, whereas TM will generate code that checks the type of a only once, and can (all things being equal) just multiply a.x and a.y without checking that they're numbers.
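As a rough, hedged illustration of why those 'implicit type'/'Structure' checks matter, the sketch below contrasts code whose object shapes stay stable with code that defeats the caches; the actual effect depends entirely on the engine.
// Objects constructed the same way share one hidden class / Structure,
// so the engine's caches for a.x and a.y stay monomorphic.
function Point(x, y) { this.x = x; this.y = y; }

function area(a) {
  return a.x * a.y;        // one shape seen here -> cheap cached access
}

area(new Point(2, 3));     // monomorphic so far
area({ y: 3, x: 2 });      // different property layout -> the cached
                           // fast path no longer applies (polymorphic)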
If you're looking at pure execution speed currently there's something of a mixed bag as each engine does appear to do better at certain tasks than others -- TraceMonkey wins in many pure maths tests, V8 wins in heavily dynamic cases, JSC wins if there's a mix. Of course while that's true today it may not be tomorrow as we're all working hard to improve performance.
The other issue I mentioned was the DOM<->JS binding cost -- this can actually play a very significant part in web performance. The best example of this is Safari 3.1/2 vs. Chrome on the Dromaeo benchmark. Chrome is based off the Safari 3.1/2 branch of WebKit, so it's reasonably safe to assume similar DOM performance (compiler differences could cause some degree of variance). In this benchmark Safari 3.1/2 actually beats Chrome despite having a JS engine that is clearly much, much slower; this is basically due to more efficient bindings between JSC and WebCore (the DOM/rendering/etc. of WebKit) than between V8 and WebCore.
Currently, looking at TM's DOM bindings seems unfair, as they haven't completed all the work they want to do (alas), so they just fall back on the interpreter :-(
..
Errmmm, that went on somewhat longer than intended, so the short answer to the original question is "it depends" :D
They're pure JavaScript. Unless a particular DOM method call is implemented in JS, they'll have little effect (not to say there hasn't been work done on reducing the overhead of such calls however).
DOM optimization is a whole 'nother kettle of squirrels monkeys spiders fish... The layout and even rendering engines come into play, and each browser has their own implementation and optimization strategy.

Does Silverlight have a performance advantage over JavaScript?

At a recent discussion on Silverlight the advantage of speed was brought up. The argument for Silverlight was that it performed better in the browser than Javascript because it is compiled (and managed) code.
It was then stated that this advantage only applies to IE, because IE interprets JavaScript, which is inefficient compared to other browsers such as Chrome and Firefox that compile JavaScript to machine code before execution and as such perform as well as Silverlight.
Does anybody have a definitive answer to this performance question, i.e. do/will Silverlight and JavaScript have comparable performance on Chrome and Firefox?
Speculating is fun. Or we could actually try a test or two...
That Silverlight vs. Javascript chess sample has been updated for Silverlight 2. When I run it, C# averages 420,000 nodes per second vs. Javascript at 23,000 nodes per second. I'm running the dev branch of Google Chrome (v. 0.4.154.25). That's still almost an 18x speed advantage for Silverlight.
Primes calculation shows a 3x advantage for Silverlight: calculating 1,000,000 primes in Javascript takes 3.7 seconds, in Silverlight takes 1.2 seconds.
So I think that for calculation, there's still a pretty strong advantage for Silverlight, and my gut feel is that it's likely to stay that way. Both sides will continue to optimize, but there are some limits to what you can optimize in a dynamic language.
Silverlight doesn't (yet) have an advantage when it comes to animation. For instance, the Bubblemark test shows Javascript running at 170 fps, and Silverlight running at 100 fps. I think we can expect to see that change when Silverlight 3 comes out, since it will include GPU support.
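For anyone who wants to reproduce the JavaScript side of the primes comparison above, a minimal sketch follows. It is a naive trial-division count, not the code used in the original comparison, and the 1,000,000 limit and timing method are just for illustration; results vary wildly by browser and machine.
// Count primes below `limit` by trial division and time it.
function countPrimes(limit) {
  var count = 0;
  for (var n = 2; n < limit; n++) {
    var isPrime = true;
    for (var d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

var start = new Date().getTime();
var primes = countPrimes(1000000);           // 78498 primes below one million
var elapsed = new Date().getTime() - start;
console.log(primes + ' primes in ' + elapsed + ' ms');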
JavaScript is run in a virtual machine by most browsers. However, JavaScript is still a funky language, and even a "fast" virtual machine like V8 is incredibly slow by modern standards.
I'd expect the CLR to be faster.
I'd say that architecturally, it's a wash.
On the one hand Silverlight is MSIL code, which is reasonably fast compared to raw, optimized native code but still runs slower due to the VM (CLR) overhead and will still have slow initial load times when being ngen'd.
On the other hand, the speed of JavaScript is much less reliable due to the huge variations between JavaScript engines, which span an order of magnitude, or more, in performance. You have slow interpreters like IE's, though IE8 is speeding things up, and then you have faster compilers/interpreters like SpiderMonkey and V8 which have only recently begun to explore the performance limits of JavaScript. There are also new technologies in the R&D phase, like TraceMonkey, which have tremendous potential to vastly improve JavaScript performance (getting close to native-code speeds). JavaScript does have the inherent disadvantage that it is single-threaded, but given the difficulty of writing good threaded code it's hard to say how much difference that makes.
At the end of the day when comparing apples to apples the real performance bottleneck is the DOM, and there it doesn't much matter what technology you're using to manipulate it.
I don't understand why you're trying to compare a scripting language with a browser plug-in.
They don't do the same thing. The former interacts with the DOM while the latter runs multimedia apps inside the browser.
Comparing Flash and Silverlight from a performance point of view would seem more useful to me.
EDIT: After some research, I found out that you can interact with the DOM in Silverlight. I don't think it can be seen as a good JavaScript replacement though, performance concerns aside, unless you need some heavy client-side interaction. I see two main disadvantages:
1) You will force your users to download a Silverlight app instead of relying on a relatively small .js file.
2) Your users are required to install Silverlight before using your page.
From the cursory testing I've done, Silverlight runs faster.
Here are some interesting results I gathered from http://bubblemark.com/
In general, Silverlight was much faster, but Chrome's javascript implementation tore everyone else to bits!
Keep in mind, this is only on one machine, one OS (XP), etc.; you would need to do much more extensive tests to say more.
I'd say yes, since it has .NET's CLR. That said, with recent developments in JavaScript implemented in Google Chrome and in the yet-to-be-fully-released Firefox 3.1, one may want to do some benchmarking of their own; I don't know of any comparisons as yet. (Anyone?)
Nonetheless, in my opinion, .NET should generally be faster than JavaScript, and as has been noted before, this will not speed up the network. Consequently, for complex algorithms, Silverlight will be faster, but for network requests you may not see any noticeable difference.
On the performance question, you may want to have a look at Flash 10, which can allow C/C++ code using "Alchemy". This may be a more portable solution than Silverlight.
It looks like Chrome's JavaScript implementation is faster than Silverlight.
Platforms should be considered here. How Silverlight performs on Linux, Solaris, or Mac is a really big question!
What about HTML5? I think HTML plays a serious role when comparing the performance of JavaScript, so we should definitely compare the performance of HTML5 + JavaScript against Silverlight.
Sure, if you're using "Internet Exploder" it probably will...
If you're using V8 (Chrome) or the upcoming Safari and Firefox, I seriously doubt it ;)
I would love to see that chess match then, BTW, where IE is playing using Silverlight and Chrome is using JavaScript. THAT would rock MSFT...!! ;)
