Recently I have been playing a lot with JavaScript (Chrome), and a few things came to mind.
V8 has a JIT, which makes code run faster.
Functional programming means you write your logic as functions and invoke/combine them in chains, which means core functions will be invoked frequently (not the real definition, just to get my point across).
A JIT is basically a case of trading space for time: the first time a high-level function runs, its machine code is cached, and the cached code is reused on subsequent runs.
So can I say that apps will be faster if I write code in an FP style and run it on a VM that has a JIT?
A good read on this subject is here: http://thibaultlaurens.github.io/javascript/2013/04/29/how-the-v8-engine-works/
Particularly the section that talks about how V8 compiles and injects JIT code:
How does V8 compile JavaScript code?
V8 has two compilers!
A “Full” Compiler that can generate good code for any JavaScript: good but not great JIT code. The goal of this compiler is to generate code quickly. To achieve that goal, it doesn’t do any type analysis and doesn’t know anything about types. Instead, it uses an Inline Cache, or “IC”, strategy to refine knowledge about types while the program runs. ICs are very efficient and bring about a 20x speed improvement.
An Optimizing Compiler that produces great code for most of the JavaScript language. It comes in later and re-compiles hot functions. The optimizing compiler takes types from the Inline Caches and makes decisions about how to optimize the code better. However, some language features are not supported yet, like try/catch blocks for instance. (The workaround for try/catch blocks is to write the “non-stable” code in a function and to call that function from the try block.)
In short, your fastest code is that which does not modify objects or prototype function definitions after they've been defined.
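To illustrate that advice (this example is mine, not from the linked article), here is a minimal sketch of shape-stable code: objects created with the same properties in the same order share a hidden class, so the inline caches at the property-access sites stay monomorphic.

```js
// A minimal sketch of the "don't change object shapes" advice.
// Objects created with the same properties, in the same order,
// share a hidden class, so V8's inline caches stay monomorphic.
function makePoint(x, y) {
  return { x: x, y: y };          // every point gets the same shape
}

function lengthSquared(p) {
  return p.x * p.x + p.y * p.y;   // the property loads here stay fast
}

lengthSquared(makePoint(1, 2));
lengthSquared(makePoint(3, 4));

// By contrast, adding or deleting properties after construction creates
// new hidden classes and can make the same call site polymorphic:
const c = makePoint(5, 6);
c.z = 7;                          // c no longer shares the original shape
lengthSquared(c);                 // this call site now sees two shapes
```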
Related
I wish to develop a JavaScript game engine that uses C++ as a back-end for rendering/updates/collision etc. Pretty much all the heavy lifting stuff.
There would then be C++ classes/functions that are exposed through modifying the isolate variable (or maybe just a native nodejs module). Some of these classes, like the Sprite class, could have its update function overridden by a JS subclass in order to allow users to customize the behavior.
Finally, the game engine would run in a loop within the JavaScript, but every frame would make a call to the C++ context to update/render and all the stuff PLUS there would be tons of calls to check input, collision, etc. Not to mention all the callbacks each subclass would make to the parent classes written in C++.
My concern is that I have read there is significant overhead (more than normal) when calling C++ from the JS context (be it FFI or native modules). Usually it's worth it for the performance, but considering how many calls would be made back and forth between the two languages each frame, perhaps this wouldn't be the best idea? Instead, maybe something like Python would be more appropriate because of its near-zero call overhead (though Python in general is much slower), or a different JS interpreter altogether?
This answer is going to be very subjective; it's based on observations from my own experience that I wouldn't call very rigorous. I'm working through this issue myself right now, and I have not verified my claims with benchmarks. That said...
Yes, calling from JS into C++ is relatively expensive. Certainly more so than calls within pure JS. Substantially more so, in fact, than calls in the other direction, from C++ into JS. I assume that a major cause of the inefficiency is that the JavaScript engine loses some optimization opportunities.
However, assuming you stick with the V8 engine, calls from JS to C++ will be much faster than calling out into any other language.
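One common mitigation (my own sketch, not something this answer prescribes) is to reduce how often the boundary is crossed, for example by packing a frame's worth of data into a typed array and making a single native call per frame. The engine.node addon and its updateFrame() function below are hypothetical.

```js
// A rough sketch of batching JS <-> C++ boundary crossings.
// The addon path and updateFrame() function are hypothetical.
const engine = require('./build/Release/engine.node');

const entityCount = 1000;
// x, y, vx, vy per entity, packed into one buffer the native side can read.
const state = new Float32Array(entityCount * 4);

function tick(dt) {
  // ... copy positions/velocities from JS-side game objects into `state` ...
  engine.updateFrame(state, dt); // one crossing per frame instead of thousands
}
```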
I'm looking for a way to protect some javascript code from reading/modifying. I know many people consider that impossible, but still...
From what I see, Chrome's V8 engine does a number of optimizations when it sees JS code, probably compiles it (?), and then runs it.
So I'm wondering is it possible to use V8's C++ api to compile the JS code into machinecode/chromecode and then feed that directly into Chrome (I don't care about other browsers)?
Supposedly it would not only be faster, but also not human-readable, something like ASM.
Is this possible?
WebAssembly does this kind of thing, so I don't understand why we can't do it with JS code.
There are also EncloseJS and pkg, which do a very similar thing.
V8 developer here. No, it is not possible to compile JavaScript ahead of time and send only the compiled code to the browser. V8 (and other virtual machines like it) contain compilers, but they cannot be used as standalone compilers to produce standalone binaries.
In theory, you could compile JavaScript to WebAssembly -- any two Turing-complete programming languages can in theory be compiled to each other. As far as I know, no such compiler exists today, though. One big reason for that is that the performance of the end result would be horrible (see the discussion with Andreas Rossberg for details); so, considering that browsers can execute JavaScript directly, people have little reason to develop such a thing. (It would also be a large and difficult task.)
As for your stated goal: your best shot at making JavaScript code unreadable is to minify it. In fact, that is effectively just as good as your idea to generate assembly, because disassemblers exist that turn assembly back into minified-like higher-level language code; they cannot reconstruct variable names or comments (because that information is lost during compilation), but they can reconstruct program logic.
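As a rough illustration of that suggestion (my sketch, not the answerer's), a minifier such as the terser npm package can do the renaming and compacting; the exact output shown in the comment is only approximate.

```js
// A minimal minification sketch, assuming the "terser" package (v5+)
// is installed; its minify() returns a Promise.
const { minify } = require('terser');

const source = `
  function computeTotalPrice(unitPrice, quantity, taxRate) {
    var subtotal = unitPrice * quantity;
    return subtotal + subtotal * taxRate;
  }
`;

minify(source).then(result => {
  console.log(result.code);
  // Roughly: function computeTotalPrice(n,t,r){var u=n*t;return u+u*r}
});
```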
What I ended up doing is moving some of the logic from JavaScript into C++ and compiling that into Node.js native modules (which is possible for Electron apps).
It works pretty well: it's very fast, the source is... as protected as it can get, but you may need to worry about cross-platform issues, and compiling/linking can be a bit of a pain. Other than that, it's great.
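For context, the JavaScript side of that setup can stay tiny; here is a minimal sketch, where the addon path and the verifyLicense() export are hypothetical (the real addon would be built with node-gyp or a similar tool).

```js
// A minimal sketch of calling into a compiled native module from JS.
// The addon path and verifyLicense() are hypothetical examples.
const protectedCore = require('./build/Release/protected_core.node');

// The sensitive logic lives in compiled C++; JavaScript only sees the result.
const userKey = 'ABCD-1234';
console.log(protectedCore.verifyLicense(userKey));
```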
WebAssembly is not doing that. And no, it's not possible either. The web is supposed to be both browser- and hardware-independent.
Moreover, a language like JS would not be faster if compiled offline -- it is only anything close to fast because it is dynamically compiled and optimised, taking dynamic profile information into account.
I want to write a program that scans JavaScript code and replaces variable names with short, meaningless identifiers, without breaking the code.
I know the YUI Compressor and Google's Closure Compiler can do this. I am wondering how I can implement this.
Is it necessary to build the abstract syntax tree? If not, how can I find the candidate variables for renaming?
Most modern JavaScript compressors are actually compilers. They parse the JavaScript input into an abstract syntax tree, perform operations on the tree (some safe, some not), and then use the syntax tree to print out code. Both UglifyJS and Closure Compiler are true compilers.
Implementing your own compiler is a large project and requires a good knowledge of compiler theory. The Dragon Book is a great resource to get started with.
You may be able to leverage existing work. I recommend starting from a non-optimizing compiler for reference.
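As a starting point (my own sketch, not from the answer), parsing into an AST and walking it is enough to find the rename candidates; the acorn and acorn-walk packages below are assumed to be installed, and scope analysis plus code generation (e.g. with escodegen) are left out.

```js
// A minimal sketch: parse JS with acorn, walk the AST, and collect the
// declared names a renamer would shorten. Scope handling is omitted.
const acorn = require('acorn');
const walk = require('acorn-walk');

const source =
  'function add(firstNumber, secondNumber) {' +
  '  var total = firstNumber + secondNumber;' +
  '  return total;' +
  '}';

const ast = acorn.parse(source, { ecmaVersion: 2020 });

const candidates = new Set();
walk.simple(ast, {
  VariableDeclarator(node) {
    if (node.id.type === 'Identifier') candidates.add(node.id.name);
  },
  FunctionDeclaration(node) {
    for (const param of node.params) {
      if (param.type === 'Identifier') candidates.add(param.name);
    }
  }
});

console.log([...candidates]); // e.g. [ 'total', 'firstNumber', 'secondNumber' ]
```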
I made http://www.whak.ca for whacking scripts into unreadable obfuscation; it has over 75 algorithms ready for you to obfuscate your code. There are also 20 JavaScript compression packers at http://www.whak.ca/packer/ that will obfuscate your code. All of these can be reverse-engineered if someone wants your code badly enough. But people can pick locks, yet we still lock our doors...
Before using any library, framework, etc. in JavaScript, I like to understand how it works.
I would like to find out how CoffeeScript works.
My assumption:
1st step: the compiler gets the string from:
<script type="text/coffeescript"></script>
2nd: it creates JS code as a string:
it = "test"  becomes  "var it = 'test';"
and in the last step the compiler uses eval() to execute the code.
P.S.:
Why has it become popular?
It has an impact on performance; after all, we spend a lot of time executing .coffee files.
The usual approach is to compile CoffeeScript code server-side and then link the resulting JavaScript in your HTML files. This is normally done using the coffee command-line utility, but you can also find build systems that take care of it for you, such as Grunt, Brunch, etc. You can also write Makefiles or simple shell scripts to take care of it.
When using some of the build systems or the coffee tool, you have the option of having them monitor your CoffeeScript sources and recompile as soon as you save. This can be quite handy; look at the 'watch' feature in the documentation.
My guess for CoffeeScript's popularity is that it gives you an arguably nicer syntax. Personally, I find the greatest merit of CoffeeScript to be the added syntactic sugar, like list comprehensions, and the fact that it treats everything as an expression (e.g., the ability to return for loops or if/else blocks from functions). You will also find languages that take this idea even further, like Coco and LiveScript.
One thing to note is that CoffeeScript is not an interpreted language. It's transpiled (compiled into another language) and then executed by the target runtime (JavaScript engine). Because of this, it has the same performance characteristics as equivalent JavaScript code. Whether you can manually write more performant code is another issue. You probably can. At any rate, it's a bit silly to talk about CoffeeScript 'performance'. As for the performance of compiled CoffeeScript, with good knowledge of JavaScript, you can probably optimize here and there, but I haven't had a need to do it ever.
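To make the "compiled server-side" point concrete, here is a minimal sketch of driving the compiler from Node instead of the coffee CLI; it assumes the coffeescript npm package is installed, and the commented output is only approximate.

```js
// A small sketch of compiling CoffeeScript ahead of time from Node
// (assumes the "coffeescript" npm package is installed).
const CoffeeScript = require('coffeescript');

const coffeeSource = 'square = (x) -> x * x';
const js = CoffeeScript.compile(coffeeSource, { bare: true });

console.log(js);
// Roughly:
//   var square;
//   square = function(x) { return x * x; };
```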
The usual way of using CoffeeScript is to compile it as a step in your build process. So you write CoffeeScript, then compile it to plain JavaScript, and then use that in your web app.
This carries no runtime cost, because the browser will load only the JavaScript, i.e.:
<script type="text/javascript" src="compiled-js-file.js"></script>
You can use the coffee command directly to compile, or some fancier build system such as Gulp or Grunt.
You can check the CoffeeScript website to see what features it has (this is what attracts developers). The most useful ones, in my opinion, are:
arrow syntax for functions (JS will support it in ES6)
existential operator to protect you from null value errors (a rough sketch of its compiled output follows this list)
classes (also present in ES6)
destructuring assignment
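Here is that existential-operator sketch; the JavaScript shown is roughly what the CoffeeScript compiler emits, and the exact output varies by compiler version.

```js
// CoffeeScript source:   userName = user?.name
// Roughly what it compiles to (exact output varies by compiler version):
var user = null;
var userName = user != null ? user.name : void 0;
console.log(userName); // undefined, instead of a TypeError
```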
Also, this means that when you are debugging such a web app, you will not be able to see where exactly the error is (or set up breakpoints in the CoffeeScript sources). Fortunately, we have source maps for exactly that problem.
I seem to recall an online script that refactors JavaScript for the purpose of optimization (i.e., making it run faster).
I am not asking for a link or for information about 'minifying' the code (and in broader terms, I am not talking about the load time of the JavaScript). I am asking if there is a script that optimizes a JavaScript program.
I am under the impression that good C compilers optimize code, so it seems that some methodology would have come about for optimizing JavaScript over the years. Is there such a service? And does such a service exist that is similar to 'minify', in the sense that it is an online service you feed your JavaScript code into and it spits out an optimized version?
No, there are no JavaScript programs that optimize code in the sense of algorithmic optimization. Unlike compilers, which perform complex analyses to optimize the final machine code (they eliminate unnecessary loops, remove unused variables... a lot of things), JavaScript is an interpreted (on-the-fly) language, and its optimization is done by the JavaScript engine of the browser. Actually, Google Chrome's JavaScript engine seems to be the fastest.
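For illustration (my own hand-written example, not produced by any tool), this is the kind of transformation that answer is describing: unused variables and unreachable branches are removed while the observable behaviour stays the same.

```js
// Before: code with an unused variable and an unreachable branch.
function areaBefore(r) {
  var pi = 3.14159;
  var unused = r * 2;        // never read
  if (false) { return 0; }   // unreachable
  return pi * r * r;
}

// After: what an optimizing compiler would effectively produce.
function areaAfter(r) {
  return 3.14159 * r * r;
}

console.log(areaBefore(2), areaAfter(2)); // same result either way
```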