JS to Dart Conversion - javascript

This is a two-part question. I am not lazy, I am simply not fluent enough in JS to convert an entire library just by referencing the Dart Synonyms page. The dart:js documentation explains how to access the JS global object, as shown in the snippet below, but if I'm not mistaken that's not what I'm looking for.
Q1: In the example snippet below, it wouldn't increase Angular's performance by utilizing Dart, correct?
var angular = context['angular'];
var myapp = angular.module('myApp', ['ngResource','ngRoute']);
If I'm right, and I do need to convert libraries unavailable in Dart, jsparser and dart-synonym are really stumping me: I can't find any simple documentation for either, and when I look through the actual Dart source I get lost.
Dart Editor throws an error when I try to build and run jsparser:
Unhandled exception:
'file:///C:/Work Root/Dart/jsparser-ec65c9e7467f/jsparser.dart': malformed type: line 26 pos 27: type 'Options' is not loaded
List args = new Options().arguments;
So I tried dart-synonym; it ran and built correctly, but then showed a clone of the Dart Synonyms page.
Q2: If accomplishing an automatic conversion is even possible, how do I use either of these?

Dart-synonym does not automatically convert other languages to Dart; it only provides a static synonym reference to aid manual conversion.
jsparser is meant to provide automatic conversion, but its last commit is more than a year old. A lot has changed since then, and I doubt it will run without significant tweaks to the source. For instance, the Options class was removed a while back, which is why you get that malformed type error.
If you want to use Angular in Dart, you can use Google's own port: AngularDart

You could use a similar technique to amber-lang, particularly since Dart is essentially Smalltalk with JS syntax, while Amber is Smalltalk that compiles to JS. Amber uses two base objects, STObject and JSObject, allowing ST code to call JS code and vice versa. Since amber-lang uses Pharo Smalltalk as its reference implementation, a lib like SmaCC (a Smalltalk parser builder) could be used to generate the wrapper parsing code; it already provides such support for Java, Python, C and a number of other languages. The way JS works, you can't write, and certainly can't debug, a large or complex app. Dart is an attempt to do that the way ST does, with a strong type system and a semantic runtime equivalent to an interpreted language, with near-assembler speed, but with JS syntax, since Google has tons of trained node.js programmers on staff.
Creating a Smalltalk VM is much easier than something like the JVM, since it only includes the base Object and the code to interop with OS libs, and is itself written in Smalltalk and converted to C using SLANG (or, on MacOS, the cross-platform libs to F-Script, using CLANG). For that reason IBM Research did a Squeak/Pharo VM that can scale to over 1000 cores (RoarVM on GitHub). Doing that with the JVM would probably take a decade.
That Smalltalk is slow is an out-of-date notion, due to it not being stack based (which no longer matters) and the work Sun did on JIT for Java, where the proof of concept was also in Smalltalk - Strongtalk. Pharo's Cogit JIT works essentially the same way: assembler code with pure interpreted semantics. I had to move away from Java due to the (lack of) speed of MSF4J microservices, which were themselves the fastest available in Java, and faster than anything in JS. I can run 256 microservices in Pharo ST with a quicker startup time, less memory use, and better throughput and monitoring/management than one express.js microservice.
Porting the 32-bit VM to a 64-bit UltraSPARC was quite easy, and resulted in software that can filter and route large quantities of monitoring data significantly faster than Cisco's offering, an IOS program running on a Cisco ASR-9010. Used Sun/Oracle T5220s go for about 1/600th the price of the ASR, which is a significant advantage.
I like Dart, but I have to say that to some degree for me it's just YAPL, since it doesn't do anything not already possible with a combination of Pharo and amber-lang. And Smalltalk syntax (Ruby is similar) is both more readable and less verbose than JS (or Java, for that matter). Go had some good ideas, but not enough to really generate much interest. ST has had 36 years of development; nothing brand new is going to offer equivalent tools or equivalent runtime stability.
Check out a4bp for an example of data analysis and visualization in Pharo. The website is also written in Pharo, using Graphviz from within Smalltalk. SmalltalkHub is a combination of Pharo ST and amber-lang. Amber-lang could be used to wrap libs like Angular, until it becomes easy enough to write browser plugins for any arbitrary language and we aren't stuck with JS.

Related

Coverage-based fuzzing of JavaScript applications

American Fuzzy Lop (AFL), and the conceptually related LLVM libFuzzer, not only generate random fuzzy strings; they also watch branch coverage of the code under test and use genetic algorithms to try to cover as many branches as possible. This increases the hit frequency of the more interesting code further downstream, since otherwise most of the generated inputs would be stopped early in some deserialization or validation layer.
But those tools work at the native-code level, which is not useful for JavaScript applications: it would end up covering the interpreter, but not really the interpreted code.
So is there a way to fuzz JavaScript (preferably in browser, but tests running in node.js would help too) with coverage guidance?
I looked at the tools mentioned in this old question, but those that handle JavaScript don't seem to mention anything about coverage profiling. And while radamsa mentions optionally pairing it with coverage analysis, I haven't found any documentation on how to actually do that.
How can one fuzz-test a JavaScript (in-browser) application with coverage guidance?
Fuzzing a JavaScript engine draws a lot of attention, as the number of browser users is about 4 billion. Several works have been done to find bugs in JS engines, including popular large engines, e.g., V8, WebKit, ChakraCore, Gecko, and some small embedded engines, like JerryScript, QuickJS, Jsish, mJS, MuJS.
It is really difficult to find bugs using AFL, as the mutation mechanisms provided by AFL are not practical for JS files; a bitflip, for example, can hardly produce a valid mutation. Since JS is a structured language, several works use the ECMAScript grammar to mutate/generate JS files (seeds):
LangFuzz parses sample JS files and splits them into code fragments. It then recombines the fragments to produce test cases.
jsfunfuzz randomly generates syntactically valid JS statements from a JS grammar manually written for fuzzing.
Dharma is a generation-based, context-free grammar fuzzer, generating files based on given grammar.
Superion extends AFL using tree-based mutation guided by JS grammar.
The above works can easily pass the syntax checks but fail at semantic checks. A lot of generated JS seeds are semantically invalid.
CodeAlchemist uses a semantics-aware approach to generate code segments based on a static type analysis.
There are two levels of bugs related to JS engines: simple parser/interpreter bugs and deep logic bugs. Recently the trend is that the number of simple bugs is decreasing while more and more deep bugs are showing up.
DIE uses aspect-preserving mutation to preserve the desirable properties of existing bug PoCs (CVEs). It also uses type analysis to generate semantically valid test cases.
Some works focus on mutating intermediate representations.
Fuzzilli is a coverage-guided fuzzer based on mutation on the IR level. The mutations on IR can guarantee semantic validity and can be transferred to JS.
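For intuition, the coverage-guided part that these tools share is conceptually a small loop. The sketch below is purely illustrative; mutate and collectCoverage are hypothetical placeholders for a grammar- or IR-aware mutator and a coverage probe (e.g. an instrumented engine or an Istanbul-style instrumenter):
// Minimal sketch of a coverage-guided fuzzing loop (all names hypothetical).
const corpus = ["var a = 1 + 2;"];            // initial seed(s)
const seenEdges = new Set();                  // global branch/edge coverage map

for (let i = 0; i < 100000; i++) {
  const seed = corpus[Math.floor(Math.random() * corpus.length)];
  const candidate = mutate(seed);             // hypothetical grammar-aware mutation
  const edges = collectCoverage(candidate);   // hypothetical: run input, return covered edges
  const newEdges = [...edges].filter(e => !seenEdges.has(e));
  if (newEdges.length > 0) {                  // keep inputs that reach new code
    newEdges.forEach(e => seenEdges.add(e));
    corpus.push(candidate);
  }
}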
Fuzzing JS is an interesting and hot topic, judging by the top security/SE conferences in recent years. Hope this information is helpful.

Can regular JavaScript be converted to asm.js, or is it only to speed up statically-typed low-level languages?

I have read the question How to test and develop with asm.js?, and the accepted answer gives a link to http://kripken.github.com/mloc_emscripten_talk/#/.
The conclusion of that slide show is that "Statically-typed languages and especially C/C++ can be compiled effectively to JavaScript", so we can "expect the speed of compiled C/C++ to get to just 2X slower than native code, or better, later this year".
But what about non-statically-typed languages, such as regular JavaScript itself? Can it be compiled to asm.js?
Can JavaScript itself be compiled to asm.js?
Not really, because of its dynamic nature. It's the same problem as when trying to compile it to C or even to native code - you actually would need to ship a VM with it to take care of those non-static aspects. At least, such a VM is possible:
js.js is a JavaScript interpreter in JavaScript. Instead of trying to create an interpreter from scratch, SpiderMonkey is compiled into LLVM and then emscripten translates the output into JavaScript.
But if asmjs code runs faster than regular JS, then it makes sense to compile JS to asmjs, no?
No. asm.js is a quite restricted subset of JS that can easily be translated to bytecode. Yet you would first have to break all the advanced features of JS down to that subset to get this advantage - a quite complicated task, imo. But JavaScript engines are designed and optimized to translate all those advanced features directly into bytecode - so why bother with an intermediate step like asm.js? js.js claims to be around 200 times slower than "native" JS.
And what about non-statically-typed languages in general?
The slideshow talks about that from …Just C/C++? onwards. Specifically:
Dynamic Languages
Entire C/C++ runtimes can be compiled and the original language interpreted with proper semantics, but this is not lightweight
Source-to-source compilers from such languages to JavaScript ignore semantic differences (for example, numeric types)
Actually, these languages depend on special VMs to be efficient
Source-to-source compilers for them lose out on the optimizations done in those VMs
In response to the general question "is it possible?" then the answer is that sure, both JavaScript and the asm.js subset are Turing complete so a translation exists.
Whether one should do this and expect a performance benefit is a different question. The short answer is "no, you shouldn't." I liken this to trying to compress a compressed file; yes, it is possible to run the compression algorithm, but in general you should not expect the resulting file to be smaller.
The short answer: The performance cost of dynamically-typed languages comes from the meaning of the code; a statically-typed program with an equivalent meaning would carry the same costs.
To understand this, it is important to understand why asm.js offers a performance benefit at all; or, more generally, why statically-typed languages perform better than dynamically-typed ones. The short answer is "run-time type checking takes time," and a longer answer would include the improved feasibility of optimizing statically-typed code. For example:
function a(x) { return x + 1; }
function b(x) { return x - 1; }
function c(x, y) { return a(x) + b(y); }
If x and y are both known to be integers, I can optimize function c to a couple of machine code instructions. If they could be integers or strings, the optimization problem becomes much harder; I have to treat these as string appends in some cases, and addition in other cases. In particular, there are four possible interpretations of the addition operation that occurs in c; it could be addition, or string append, or two different variants of coerce-to-string-and-append. As you add more possible types, the number of possible permutations grows; in the worst case for a dynamically-typed language, you have k^n possible interpretations of an expression involving n terms which could each have any of k types. In a statically-typed language, k=1, so there is always exactly one interpretation of any given expression. Because of this, optimizers are fundamentally more efficient at optimizing statically-typed code than dynamically-typed code: there are fewer permutations to consider when searching for opportunities to optimize.
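To make those permutations concrete, here is what the c function above actually yields for a few argument types (the results follow directly from JavaScript's coercion rules):
function a(x) { return x + 1; }
function b(x) { return x - 1; }
function c(x, y) { return a(x) + b(y); }

console.log(c(1, 2));     // 3      -> every "+" is numeric addition
console.log(c("1", 2));   // "111"  -> "1" + 1 appends to give "11", then "11" + 1 appends again
console.log(c(1, "2"));   // 3      -> "2" - 1 coerces to a number, so both additions stay numeric
console.log(c("1", "2")); // "111"  -> mixed append/arithmetic, same shape as the second call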
The point here is that when converting from dynamically-typed code to statically-typed code (as you'd be doing when going from JavaScript to asm.js), you have to account for the semantics of the original code. That means the type-checking still occurs (it has merely been spelled out in statically-typed code), and all those permutations are still present to stifle the compiler.
A few facts about asm.js, which hopefully make the concept clear:
Yes you can write the asm.js dialect by hand.
If you look at examples of asm.js, though, they are very far from being user friendly. Obviously JavaScript is not the front-end language for creating this code.
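For a taste of the dialect, a minimal hand-written module in the asm.js style might look like the sketch below (illustrative only); the | 0 coercions are what declare parameters and return values as integers:
function MinimalAsmModule(stdlib, foreign, heap) {
  "use asm";                  // opts this function body into asm.js validation

  function add(x, y) {
    x = x | 0;                // parameter type annotation: int
    y = y | 0;                // parameter type annotation: int
    return (x + y) | 0;       // return type annotation: int
  }

  return { add: add };
}

// It still runs as ordinary JavaScript:
var mod = MinimalAsmModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3));   // 5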
Translating vanilla JavaScript to the asm.js dialect is not possible.
Think about it - if you could already translate standard JavaScript in a fully static manner, why would there be a need for asm.js? The very existence of asm.js means that the JavaScript JIT people at some point gave up on the promise that JavaScript will get faster without any effort from the developer.
There are several reasons for this, but let's just say it would be really hard for the JIT to understand a dynamic language as well as a static compiler does - and then probably just as hard for developers to fully understand the JIT.
In the end it boils down to using the right tool for the task. If you want static, very performant code, use C/C++ (or Java); if you want a dynamic language, use JavaScript, Python, ...
asm.js was created out of the need for a small subset of JavaScript that can be easily optimized. If you had a way to convert JavaScript to JavaScript/asm.js, asm.js would not be needed anymore - that method could be built into JS interpreters directly.
In theory, it is possible to convert / compile / transpile any JavaScript operation to asm.js if it can be expressed with the limited subset of the language present in asm.js. In practice, however, there is no tool capable of converting ordinary JavaScript to asm.js at the moment (June, 2017).
Either way, it would make more sense to convert a language with static typing to asm.js, because static typing is a requirement of asm.js, and the lack of it is one of the features of ordinary JavaScript that makes it exceptionally hard to compile to asm.js.
Back in 2013, when asm.js was hot, there was an attempt to compile a statically typed language similar to JavaScript, but both the language and the compiler seem to have been abandoned.
Today, in 2017, the JavaScript supersets TypeScript and Flow would be suitable candidates for conversion to asm.js, but neither language's core dev team is interested in such a conversion. LLJS had a fork that could compile to asm.js, but that project is pretty much dead. ThinScript is a much more recent attempt, based on TypeScript, but it doesn't appear to be active either.
So the best and easiest way to produce asm.js code is still to write your code in C/C++ and convert / compile / transpile it. However, it remains to be seen whether we'll even want to do this in the foreseeable future. WebAssembly may soon replace asm.js altogether, and TypeScript-like languages such as TurboScript and AssemblyScript that compile to WebAssembly are already popping up. In fact, TurboScript was originally based on ThinScript and used to compile to asm.js, but they appear to have abandoned this feature.
It may be possible to convert regular JavaScript to asm.js by first compiling it to C or C++, and then compiling the generated code to asm.js using Emscripten. I'm not sure if this would be practical, but it's an interesting concept nonetheless.
There is also a compiler called NectarJS that compiles JavaScript to WebAssembly and ASM.js.
Check out http://badassjs.com/post/43420901994/asm-js-a-low-level-highly-optimizable-subset-of
Basically, you need to check that your code is asm.js compatible (no coercion or type casting, you need to manage the memory yourself, etc.). The idea is to write your code in JavaScript, find the bottlenecks, and then change that code to use asm.js and ahead-of-time compilation instead of JIT and dynamic compilation. It's a bit of a pain, but you can still use JavaScript - or other languages like C++, or, in the near future, LLJS.

Would it make sense to run JavaScript on the Lua VM?

Lua is small and can be easily embedded. The current JavaScript VMs are quite big and hard to integrate into existing applications.
So wouldn't it be possible to compile JavaScript to Lua or Lua bytecode?
Especially given the constraints of mobile applications, this seems like a good fit. Being able to easily integrate one of the most popular scripting languages into any iPhone or Android app would be great.
I'm not very familiar with Lua so I don't know if this is technically feasible.
With Luvit there is an active project trying to port the Node.js architecture to Lua, so the evented JavaScript world can't be too far away from what's possible in Lua.
The wins of compiling Javascript to Lua are not as great as you might first imagine. Javascript's semantics are very different to Lua's (the LuaJIT author cites Lua's design as one of the main reasons LuaJIT can compete so favourably with Javascript JIT compilers).
Take this code:
if ("1" == 1)
{
    print("Yes");
}
This prints "Yes" in Javascript. The equivalent code in Lua does not, as strings are never equal to numbers in Lua. This may seem like a small point, but it has a fundamental consequence: we can no longer use Lua's built-in equality testing.
There are two approaches we could take. We could rewrite 1 == "1" to javascript_equals(1, "1"), or we could wrap every JavaScript value in Lua and use Lua's metatables to override the behaviour of the == operator.
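Either way, the generated code must reproduce JavaScript's loose-equality coercion rules. As a rough illustration of how much logic hides behind a single ==, here is a simplified sketch - written in JavaScript for readability - of what such a hypothetical javascript_equals helper would have to implement in Lua (the null/undefined and object cases are omitted):
// Simplified sketch of JavaScript's abstract equality, primitives only.
// A real port would also need the null/undefined and object-to-primitive rules.
function javascript_equals(x, y) {
  if (typeof x === typeof y) return x === y;                           // same type: plain comparison
  if (typeof x === "number" && typeof y === "string") return x === Number(y);
  if (typeof x === "string" && typeof y === "number") return Number(x) === y;
  if (typeof x === "boolean") return javascript_equals(Number(x), y);  // booleans coerce to numbers first
  if (typeof y === "boolean") return javascript_equals(x, Number(y));
  return false;
}

console.log(javascript_equals("1", 1));  // true, matching JavaScript's "1" == 1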
So we already lose some efficiency by mapping JavaScript onto Lua. This is a simple example, but it continues like this all the way down; for example, all the operator rules differ between JavaScript and Lua.
We would even have to wrap Javascript objects, because they aren't the same as Lua tables. For example Javascript objects only support string keys, and coerce any index to a string:
> a = {}
{}
> a[1] = "Hello"
'Hello'
> a["1"]
'Hello'
You also have to watch out for Javascript's scoping rules, vararg functions, and so on.
Now, all of these things are surmountable, if someone were to put the effort into a full compiler. However any efficiency gains would soon be drowned out. You would essentially end up building a Javascript interpreter in Lua. Most Javascript interpreters are written in C and already optimised for Javascript's semantics.
So, doing it for efficiency is a lost cause. There may be other reasons - such as supporting Javascript in a Lua-only environment, though even then if possible just writing Lua bindings to an existing Javascript interpreter would probably be less work.
If you want to have a play with a Javascript->Lua source-to-source translator, take a look at js2lua, which is a toy project I created some time back. It's not anywhere complete, but playing with it would certainly give some food for thought. It already includes a Javascript lexer, so that hard work is done already.

C interpreter written in javascript

Is there any C interpreter written in JavaScript or Java?
I don't need a full interpreter but I need to be able to do a step by step execution of the program and being able to see the values of variables, the stack...all that in a web interface.
The idea is to help C beginners by showing them the step by step execution of the program.
We are using GWT to build the interface so if something exists in Java we should be able to use it.
I can modify it to suit my needs but if I can avoid to write the parser / abstract-syntax tree walker / stack manipulation... that would be great.
Edit :
To be clear, I don't want to simulate all of C, because some programs can be really tricky.
By step I mean a basic operation such as: expression evaluation, assignment, function call.
The C I want to simulate will contain: variables, for, while, functions, arrays, pointers, math functions.
No goto, string functions, ctype.h, setjmp.h... (at least for now).
Here is a prototype : http://www.di.ens.fr/~fevrier/war/simu.html
In this example we have manually converted the C code to a JavaScript representation, but it's limited (expressions such as a == 2 || a = 1 are not handled) and only works for programs converted by hand.
We have at our disposal a C compiler on a remote server, so we can check whether the code is correct (and doesn't have any undefined behavior). The parsing / AST construction can also be done remotely (so in any language), but the AST walking needs to be in JavaScript in order to run on the client side.
There's a C grammar available for antlr that you can use to generate a C parser in Java, and possibly JavaScript too.
There is Emscripten, which converts LLVM-based languages to JS; with a little hacking on it you may be able to produce a C interpreter.
felixhao28's JSCPP project provides a C++ interpreter in JavaScript, though with some limitations.
https://github.com/felixhao28/JSCPP
So an example program might look like this:
var JSCPP = require('JSCPP');
var launcher = JSCPP.launcher;
// A small C++ program supplied as a string, plus the text fed to it as stdin.
var code = 'int main(){int a;cin>>a;cout<<a;return 0;}';
var input = '4321';
var exitcode = launcher.run(code, input);
console.info('program exited with code ' + exitcode);
As of March 2015 this is under active development, so while it's usable there are still areas where it may continue to expand. Check the documentation for limitations. It looks like you can use it as a straight C interpreter with limited library support for now with no further issues.
I don't know of any C interpreters written in JavaScript, but here is a discussion of available C interpreters:
Is there an interpreter for C?
You might do better to look for any sort of virtual machine that runs on top of JavaScript, and then see if you can find a C compiler that emits the proper machine code for the VM. A likely one would seem to be LLVM; if you can find a JavaScript VM that can run LLVM, then you will be in great shape.
I did a few Google searches and found Emscripten, which translates C code into JavaScript directly! Perhaps you can do something with this:
https://github.com/kripken/emscripten/wiki
Perhaps you can modify Emscripten to emit a "sequence point" after each compiled line of C, and then you can make your simulated environment single-step from sequence point to sequence point.
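As a purely hypothetical sketch of that idea (Emscripten does not emit anything like this by itself; every name below is invented), suppose each compiled C statement were registered as a thunk with a sequencePoint() hook - a tiny driver could then execute one C line per call to step():
var state = { x: 0 };            // stand-in for the compiled program's variables
var pending = [];                // compiled "lines" waiting to run

function sequencePoint(lineNo, thunk) {
  pending.push({ lineNo: lineNo, thunk: thunk });
}

// What instrumented output might register for a two-line C body:
sequencePoint(1, function () { state.x = 5; });
sequencePoint(2, function () { state.x = state.x + 1; });

function step() {                // run exactly one C line, then pause
  var next = pending.shift();
  if (!next) { return false; }
  next.thunk();
  console.log('executed C line', next.lineNo, 'state:', state);
  return true;
}

step();  // executed C line 1 state: { x: 5 }
step();  // executed C line 2 state: { x: 6 }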
I believe Emscripten is implementing LLVM, so it may actually have virtual registers; if so it might be ideal for your purposes.
I know you specified C code, but you might want to consider a JavaScript emulation of a simpler language. In particular, please consider FORTH.
FORTH runs on an extremely simple virtual machine. In FORTH there are two stacks, a data stack and a flow-of-control stack (called the "return" stack); plus some global memory. Originally FORTH was a 16-bit language but there are plenty of 32-bit FORTH implementations out there now.
Because FORTH code is sort of "close to the machine" it is easy to understand how it all works when you see it working. I learned FORTH before I learned C, and I found it to be a valuable learning experience.
There are several FORTH interpreters available in JavaScript already. The FORTH virtual machine is so simple, it doesn't take very long to implement it!
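To give a sense of how little is involved, here is a toy sketch of a FORTH-style stack machine in JavaScript; the word set and program format are invented purely for illustration, and the return stack is left out:
// A minimal FORTH-like interpreter: a data stack plus a dictionary of words.
function run(program) {
  const data = [];                                        // the data stack
  const words = {
    "+":   () => data.push(data.pop() + data.pop()),      // add the top two items
    "dup": () => data.push(data[data.length - 1]),        // duplicate the top item
    ".":   () => console.log(data.pop()),                 // pop and print
  };
  for (const token of program.trim().split(/\s+/)) {
    if (token in words) words[token]();                   // execute a known word
    else data.push(Number(token));                        // otherwise push a number literal
  }
}

run("2 3 + dup .");   // prints 5 (and leaves another 5 on the stack)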
You could even then get a C-to-FORTH translator and let the students watch the FORTH virtual machine interpret compiled C code.
I consider this answer a long shot for you, so I'll stop writing here. If you are in fact interested in the idea, comment below and ask for more details and I will be happy to share them. It's been a long time since I wrote any FORTH code but I still remember it fondly, and I'd be happy to talk more about FORTH.
EDIT: Despite this answer being downvoted to a negative score, I am going to leave it up here. A simulation for educational purposes is IMHO more valuable if the simulation is simple and easy to understand. The simple stack-based virtual machine for FORTH is very simple, yet you could compile C code to run on it. (In the 80's there was even a CPU chip made that had FORTH instructions as its native machine code.) And, as I said, I personally studied FORTH when I was a complete beginner and it helped me to understand assembly language and C.
The question has no accepted answer, now over two years after it was asked. It could be that Loïc Février didn't find any suitable JavaScript interpreter. As I said, there already exist several JavaScript interpreters for the FORTH virtual machine. Therefore, this answer is a practical one.
C is a compiled language, not an interpreted one, and it has features like pointers that JS doesn't support at all, so interpreting C in JavaScript doesn't really make sense.

GWT reduce compiled javascript size

I have found that the size of the compiled JavaScript grows faster than I had expected. Adding a few lines of Java code to my project can increase the script size by several KB.
At the moment my compiled project weighs 1 MB. I'm not using any external libraries except those for MVP (Activities & Places), testing (JUnit) and logging.
I would like to know if there are any coding practices/recommendations for keeping the compiled script as small as possible. I'm not referring to code splitting, but to coding techniques or patterns that can make the compiled JavaScript effectively smaller.
Many thanks
GWT uses a "pay as you go" design philosophy, and since you're not allowed to use reflection the compiler can statically prove (on a method-by-method basis) that a section of code is "reachable", and eliminate those that are not. For example, if you never use the remove() method on ArrayList, then that code does not get included in the resulting JavaScript.
If you are seeing several kilobyte jumps with the addition of just a few lines, it probably means that you've introduced the use of a new type (and possibly one that depends on other new types) that you had not yet been using. It might also mean that you've made a change to send this new type "over the wire" back to the server, in which case a GWT generator had to include JavaScript for marshaling that type, and any new types that are reachable via its "has-a" and "is-a" references.
So if it were me, I would begin there: when you catch a 2-line change making a multi-kilobyte increase, start by looking at the types and asking whether it is a type that you have used before, and whether you're sending a new type over the wire, and whether that type also depends on other types under the hood.
One final thought: in Ray Ryan's 2009 presentation at Google I/O he mentioned a superstition that he had picked up from the GWT compiler team, where they recommended against using generic types (I'm not speaking of Java Generics here, but rather supertypes) as RPC arguments & return values. In particular, instead of having your RPC call take or return a Map, have it take or return a HashMap instead. The belief is that the GWT generator can then narrow the amount of serialization code that it has to create at compile time (because it could, for example, refrain from generating serialization code for a TreeMap).
I hope this helps.
GWT creates a different output version for each supported browser, so when you say the project size is 1 MB, are you referring to the combined size of these? (Each browser only downloads the one it actually needs.)
I have tried experimenting with the generated output when using various inheritance/class/generics constructs. Unfortunately, the extra complexity introduced far outweighs the small size improvements gained (e.g. from dropping generics).
I have been on some large GWT projects (50,000+ lines) and have found that code obfuscation coupled with turning on compression on the web server is the simplest, most effective way to minimize the downloads. If this does not shrink the code enough, then look into GWT's compilation report, which you can use to pinpoint potentially problematic classes and places to insert code splitting.
