Dart vs JavaScript - Are they compiled or interpreted languages?

Is Dart considered to be a compiled or an interpreted language?
The same question holds for JavaScript.
The reason for the question:
I've been watching an interview with the founders of Dart, and at 7:10 Lars Bak says:
"When you [...] in a JavaScript program, you actually execute JavaScript before you start running the real program. In Dart, you don't execute anything before the first instruction in main is being executed".
It sounded to me that he's saying that JavaScript is a compiled language while Dart is an interpreted language. Is it true?
Isn't the Dart VM a compiler?

Depends on the definition of "interpreted" and "compiled" language. And even then, it always depends on the implementation.
What Lars meant is that JavaScript builds its class structures (and other global state) by executing code. In Dart the global state is described by the language syntax and thus only needs parsing (and even then most of it can be skipped at first). As a consequence, Dart programs can start executing "real" code faster than JavaScript programs.
This obviously only holds for the Dart VM, since programs that have been compiled to JavaScript must use JavaScript mechanisms to build their classes.
Edit (more details):
Take, for example, the following extremely simple class A:
In Dart:
class A {
  final x;
  A(this.x);
  foo(y) => y + x;
}
In JavaScript:
function A(x) { this.x = x; }
A.prototype.foo = function(y) { return y + this.x; }
When the Dart VM starts up, it begins by going through the program. It sees the class keyword, reads the class name (A) and could then simply skip to the end of the class (by counting opening and closing braces, making sure they are not in strings). It does not care about the contents of A until A is actually instantiated. Now, in reality, it actually looks through the class and finds all the members, but it does not read the contents of methods until they are needed. In any case: it does this in a very fast processing step.
In JavaScript things get more complicated: a fast VM can skip the actual body of the function A (similar to what Dart does), but when it sees A.prototype.foo = ... it needs to execute code to create the prototype object. That is, it needs to allocate a function object (A), look up its prototype property, and modify that object, adding a new property that points to a new function object. In other words: in order to even see that you have a class, you need to execute code.
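As a small illustrative sketch (not part of the original answer), the JavaScript "class" above only takes shape as statements execute at runtime, and methods can even be added conditionally, which is exactly why the engine cannot know the shape of A without running code:

function A(x) { this.x = x; }

// Before the next line runs, A has no foo method at all;
// new A(1).foo(2) would throw a TypeError at this point.
A.prototype.foo = function(y) { return y + this.x; };

// Methods may be added conditionally, so the full shape of A
// is only known once this code has actually executed.
const legacyMode = false;   // hypothetical feature flag
if (legacyMode) {
  A.prototype.bar = function() { return this.x * 2; };
}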

Dart, in its primary implementation, is executed by a virtual machine (VM), which serves as the runtime for programs written in the language.
The current virtual machine is implemented as a "just-in-time" (JIT) runtime engine.
This means that programs are not interpreted but compiled. However, this compilation process (translating source code to machine instructions) is spread out over an unspecified period of time.
This allows the virtual machine to defer certain operations indefinitely, or to skip them entirely.
Assume you have a very large and complex program with many classes that may never be used during a short execution session.
JIT compilation makes it possible not to compile the unused classes at all, but merely to parse them into tokens.
These tokens are later used (on demand) to translate the code into an intermediate representation from which machine code is constructed.
This process is transparent to the user of the program. Only the source code that is required for the program to work correctly is compiled to machine code.
Some source code may never be compiled at all, which saves a lot of time.
Conclusion:
If Dart is used in its primary form, on its virtual machine, then it is compiled to machine code.

Dart compiles into JavaScript, and JavaScript is an interpreted language. Usually, by 'compiled' languages one understands languages that are compiled into platform-specific machine code, run directly on the CPU, and don't require an interpreter to run, which is not the case for either JS or Dart. So I would say that both JS and Dart are interpreted.

Related

Why is accessing myInstance.property1.subproperty2.subproperty3 costly in JavaScript but free in C++?

I have always understood it to be a cost-savings operation in JavaScript to avoid repeatedly referencing a nested property on an object. Instead of writing a.b.c.d over and over, you would favor let x = a.b.c.d; and then use x (what I've often heard colloquially called "caching the reference.")
It recently came up in conversation with a friend that such a thing would be completely unnecessary and foolish in C++.
Is that true? If so, why? I guess it has to do with the difference in the underlying language implementation between a C++ object and a JavaScript object, but what is the difference exactly?
JavaScript objects are closer to C++'s std::map (or std::unordered_map) than they are to C++ classes.
C++ has the advantage of having separate compilation and run steps. The compiler can really take as long as it likes to analyze large chunks of your program and heavily optimize them. When you're writing C++ you aren't really writing a program for the CPU to execute. You're describing the behavior of a program and the compiler will use that description to come up with a program for you. Your browser's JavaScript runtime (likely a JIT compiler) simply doesn't have the time to do the same level of analysis and optimization. It has to compile and run your program quickly enough that users don't perceive any delay. That's not to say a JavaScript runtime won't do any optimization, but it will tend to be more incremental and localized than what a C++ compiler that takes 20 minutes to compile a program can do.
All of the attributes of a C++ class are known at compile time. When you access an attribute of an object in C++, the compiler will resolve that access at compile time, likely to a handful of memory loads or a single function call instruction. Since it's all resolved at compile time, it doesn't matter how deeply an attribute lookup is nested. The runtime performance will be the same. Additionally, the compiler will do that sort of memoization for you, likely by keeping a repeatedly-accessed attribute in a register.
The same is not true of JavaScript. JavaScript objects don't have a defined set of properties. They can be added and removed throughout the lifetime of the object. That means that the JavaScript runtime has to track an object's properties using some sort of associative data structure (likely a hash table). When you request an attribute of an object, the runtime has to look through that data structure to find the value that you want, and it has to do that lookup for every level of nesting.
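A short JavaScript sketch of the "caching the reference" pattern the question describes (the object a is made up for illustration); each level of a.b.c.d is a separate property lookup at runtime, so hoisting the result out of a hot loop avoids repeating those lookups:

const a = { b: { c: { d: 42 } } };   // hypothetical nested object

// Repeated access: every iteration walks a -> b -> c -> d again.
let sum1 = 0;
for (let i = 0; i < 1e6; i++) {
  sum1 += a.b.c.d;
}

// Cached access: the lookup chain is walked once, up front.
const d = a.b.c.d;
let sum2 = 0;
for (let i = 0; i < 1e6; i++) {
  sum2 += d;
}

In practice a modern JIT's inline caches may shrink the difference considerably, but the cached form also documents the intent and never has to repeat the nested lookups.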

The confusion with JIT compilation in V8

I was studying the inner workings of V8 and came across the term JIT compiler. Initially, I read in this article https://www.quora.com/How-does-the-JIT-compiler-work-in-JS that the JIT compiler in V8 is called "Ignition", which is an interpreter. From that I concluded that a JIT compiler is just an interpreter. But later I found another article https://blog.logrocket.com/how-javascript-works-optimizing-the-v8-compiler-for-efficiency/ describing JIT compilation as the combination of both an interpreter and a compiler. So, is a JIT compiler really a combination of an interpreter and a compiler, or is it just an interpreter?
V8 developer here. Just to clarify and expand what commenters have been pointing out already:
"JIT" means "just in time", and means that some execution environment dynamically (i.e. at runtime) decides to produce something (typically machine code -- colloquially, "JIT" tends to mean "just-in-time compilation", although if you decide to prepare a meal exactly when you're hungry and then eat it right away when it's done, then that's technically also "JIT" preparation.) The canonical opposite would be a language like C/C++, which is compiled by the developer, long before being delivered to and executed by the user. Another "opposite" in a different direction is an execution environment that executes something without producing machine code dynamically. Such environments are typically called "interpreters".
In the past, V8 used to always produce machine code. It simply had no way to execute JavaScript that did not involve compiling it to machine code first. Obviously this happened on the client, so it was a textbook example of a just-in-time compiler (or, more accurately, a set of several compilers... oh well, details!).
In recent years, V8 has had an interpreter as its first execution tier. Now usage of terms gets complicated, because this interpreter "compiles" JavaScript "just in time" to bytecode (which is then interpreted), but when someone says "JIT compiler", they usually mean that it's not an interpreter.
V8 also has an optimizing compiler that produces machine code. It runs at runtime (when a function is considered hot), so it's a just-in-time compiler.
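As a rough illustration of that last point (not from the answer itself): a function that is called many times becomes "hot", and V8's optimizing compiler picks it up at runtime. Running the sketch below with node --trace-opt (a V8 flag; the exact output varies between versions) typically logs the function being marked for optimized recompilation:

// hot.js -- run with: node --trace-opt hot.js
function square(n) {
  return n * n;
}

let total = 0;
for (let i = 0; i < 1e7; i++) {
  total += square(i);   // repeated calls make square() hot
}
console.log(total);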
To transpile is, roughly, to compose for and "pile on to an ARCHITECTURE-VENDOR-OPERATING_SYSTEM-ENVIRONMENT target," for example (in LLVM terms):
target triple = "x86_64-apple-macosx10.7.0"
-march=x -mcpu=y, and -mattr=a,-b,+c.
ExecutionEnvironment/TargetSelect/selectTarget()
Can ...[one compile and execute with a triple target]?
For this reason, I would call V8 an interpreter/compiler hybrid, yet not sufficiently an interpreter for targeting the Cloudflare service-worker environment with Rust/LLVM wrappers alone; after all, an interpreter is geared towards application programming or graphical user interfaces, not a service worker for a V8 'edge' server. In my research so far, the abstract syntax tree, the target, and the scope that the global receiver mirrors all appear to be closely related.

Equivalent terminology for 'compile time' in Javascript

The term compile time is used with compiled languages (such as C++) to indicate the point in which source code is going through the compilation process.
The term runtime is used to indicate the point in which the application is opened and being run by a user.
If, for example, we wrote a simple game that rendered terrain based on some list of vertices... if this terrain data was fetched from a server, I might say that the state of the terrain is unknown until runtime. However, if it was envisioned that we'd only have one terrain 'model,' and we configured that directly in the source code, I might say the state of the terrain is known at compile time (would I be wrong in saying this?).
In Javascript, what would the equivalent terminology be for compile time? My own solution was to call it design time, however I'd be interested to know if there is correct terminology for this.
It might make more sense, in the case of your example, to just say that it is 'hard-coded', as this seems to be more accurate for what you are describing. Things that are hard-coded are always known at compile-time, but things known at compile-time are not necessarily hard-coded (in C++, for example, they can be generated with constexpr functions, or injected using build parameters).
The closest thing to 'compile-time' in JavaScript I would probably call 'build-time', as you often have some sort of build step in JavaScript, whether this is a heavyweight build process using WebPack, just simple minification, or even just collecting a particular version of your application's files into some sort of distributable package. Even if you're not actually performing such a step at all, I think people would generally understand what was meant by this.
Depends on the implementation, but some engines have "just in time" (JIT) compilation. See JavaScript Just In Time compilation.
However, the things that you can do as the code writer, rather than the engine, are mostly limited to bundling and minification, which reduce the cost of fetching all of the scripts needed to run your program.
The closest comparison would likely be how you can dynamically and statically link in compiled languages.
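A minimal sketch of that linking analogy (module names are hypothetical): a static import is resolved when the module graph or bundle is built, while a dynamic import() is resolved only when the code actually runs:

// Static "linking": resolved up front when the module graph is built.
import { renderTerrain } from './terrain.js';

// Dynamic "linking": fetched and evaluated only when this function runs.
async function openEditor() {
  const editor = await import('./editor.js');
  editor.start();
}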
There is no real equivalent to "compile time" in Javascript.
A common term to refer to e.g. "hoisting" or variable declarations without an assigned value is "creation phase" (vs. "execution phase").
This works well for explaining and understanding JavaScript. However, there is not necessarily such a specific "phase" actually happening.
The Language Specification ECMA-262 doesn't use these terms either. Instead, it only uses relative terms, i.e. it specifies what should happen before or after something else, and apparently leaves the implementation to the engine developers.
E.g. it is commonly explained on the internet along these lines:
a var is declared in the "creation phase", and instantiated in the "execution phase"
But ECMA only specifies it more like this way:
a var (/let/const/...) is created when X, but stays 'undefined' as long as Y, ... (but may not be accessed until Z, ...)
Examples for such relative terms:
ECMA-262, 11th, chapter 13.3.2:
Var variables are created when their containing Lexical Environment is instantiated ...
... is assigned the value ... when the VariableDeclaration is executed, not when the variable is created
ECMA-262, 11th, chapter 8.1:
a new Lexical Environment is created each time such code is evaluated
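A small JavaScript example of the "creation phase" vs. "execution phase" distinction described above (this is observable behavior, not ECMA terminology):

// "Creation phase": the declarations below are already known here.
console.log(v);    // undefined -- var v exists, but has no value yet
// console.log(l); // would throw a ReferenceError: l exists but may not be accessed yet

var v = 1;         // "execution phase": v is assigned 1 only when this line runs
let l = 2;         // l was created earlier, but is only initialized here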

How can I have a function of type (arr: T[]) => T in C++ that compiles to WebAssembly?

I would like to write a function head that returns the first value of an array, with a type signature (arr: T[]) => T (TypeScript pseudo code).
The idea is to compile the C++ function to WebAssembly using Emscripten and use this head function in my javascript app.
I know C++ templates would provide the right tool for such an abstraction, but I wonder whether templates would work here, since they operate at compile time.
PS: I am a C++ beginner; any link to any resource is welcome, I would like to learn.
WebAssembly doesn't support "generics" or "templates" per se; it only has the types i32, i64, f32, and f64.
In pure C++ that's fine, because your compiler will just instantiate all the template specializations you need and then use them within WebAssembly. If you inter-operate across languages (say, C++ in WebAssembly with JavaScript or TypeScript), then you can explicitly specialize your templates and export them from your .wasm file so that JavaScript / TypeScript can call that specialization. Of course that means you have to know what you'll need up front!
One thing you could do, but is totally impractical, is just-in-time generate the .wasm file at runtime when you figure out what template instantiation you actually need. That's impractical because tooling just isn't there right now, you'd need at least parts of a C++ compiler running in WebAssembly, and then you'd need to patch your WebAssembly.Table at runtime (which is totally doable... just not actively done these days).
For your specific use case though (return the first element of an array), I'm not sure you can do much! Because WebAssembly's types are so limited, you can only deal with things that fit in 32 or 64 bits if you must pass them through as parameters. Even then, your array can't just generically expand to arguments, because WebAssembly parameter counts are pre-determined at compilation time (bindings to JavaScript can drop / getValue on them, but you really don't want that). What you probably want is to pass things through the Memory, which is similar to dealing with strings (in that strings are an array of characters).
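As a hedged sketch of that "pass things through the Memory" idea, on the JavaScript side: assume the C++ code explicitly specializes head for int32_t and exports it under the (hypothetical) name head_i32, together with malloc/free, e.g. built with something like em++ head.cpp -o head.js -s EXPORTED_FUNCTIONS=_head_i32,_malloc,_free. The generated glue then exposes a Module object whose heap views you can write the array into:

// Hypothetical usage of the Emscripten-generated module.
const arr = new Int32Array([7, 8, 9]);

// Copy the array into WebAssembly linear memory.
const bytes = arr.length * arr.BYTES_PER_ELEMENT;
const ptr = Module._malloc(bytes);
Module.HEAP32.set(arr, ptr / 4);          // HEAP32 is indexed in 4-byte elements

// Call the explicitly specialized, exported function (head<int32_t> in C++).
const first = Module._head_i32(ptr, arr.length);   // -> 7

Module._free(ptr);

The exact export names, build flags, and whether HEAP32 is exposed depend on your Emscripten version and settings, so treat this as the shape of a solution rather than copy-paste code.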

JavaScript compilation in V8

In the V8 home (the Google's JavaScript engine) we read this:
V8 compiles and executes JavaScript source code
Does it mean that JavaScript is not an interpreted language in V8?
Does V8 use a just-in-time compilation approach for JavaScript?
Edit: There is another existing question which already addresses my first question, but not the second.
Does it mean that JavaScript is not an interpreted language in V8?
The answer to this is "it depends".
Historically, V8 has compiled directly to machine code using its "full-codegen" compiler, which produces unoptimized code that uses inline caching to implement most operations, such as arithmetic operations, loads and stores of variables and properties, etc.
The code generated by full-codegen keeps track of how "hot" each function is, by adjusting a counter when the function is called and when it jumps back to the top of loops.
It also keeps track of the types of the variables used in each expression.
If it determines that a function (or part of a function) is very hot, and it has collected enough type information, it triggers the "Crankshaft" compiler which generates much better code.
However, the V8 developers are actively working on moving to a different system where they start off with an interpreter called "Ignition" and then use a compiler called "Turbofan" to produce optimized code for hot functions.
Here are a couple of posts from the V8 developers blog describing this:
Firing up the Ignition Interpreter
Help us test the future of V8
Does V8 use a just-in-time compilation approach for JavaScript?
Yes, in a number of ways.
Firstly, it has a lazy parsing and lazy compilation mechanism. This means that when it parses a Javascript source file it parses the outermost scope eagerly, generating the full-codegen code immediately.
However, for functions defined within the file, it skips over them and just records the name of the function and the location of its source code. It generates a dummy function which simply calls into the V8 runtime to trigger the actual compilation of the function.
Secondly, it has a two stage compiler pipeline as described above, using either full-codegen+crankshaft or ignition+turbofan.
When the compilation is triggered it will initially generate unoptimized code or ignition bytecode (which it can do very quickly), and then later if the code is hot it triggers an optimized re-compilation (which is much slower but generates much better code).
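If you want to poke at that two-tier behavior directly, V8 has internal "natives syntax" helpers enabled by the --allow-natives-syntax flag (they are unstable, for experimentation only, and their names and behavior change between V8 versions); a rough sketch:

// tiers.js -- run with: node --allow-natives-syntax tiers.js
function add(a, b) { return a + b; }

%PrepareFunctionForOptimization(add);   // required on recent V8 versions
add(1, 2);
add(3, 4);                              // let the first tier collect type feedback
%OptimizeFunctionOnNextCall(add);
add(5, 6);                              // this call runs through the optimizing tier

console.log(%GetOptimizationStatus(add));   // bit field describing the function's current tier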
