To improve performance, JavaScript engines sometimes only fully parse functions when they are actually called.
For example, from the SpiderMonkey source code:
Checking the syntax of a function is several times faster than doing a full parse/emit, and lazy parsing improves both performance and memory usage significantly when pages contain large amounts of code that never executes (which happens often).
What steps can the parser skip while still being able to validate the syntax?
It appears that in SpiderMonkey some of the savings come from not emitting bytecode, as a full parse would. Does a full parse in e.g. V8 also include generating machine code?
First off, a clarification: the two steps are called "pre-parsing" and "full parsing". "Lazy parsing" describes the strategy of doing the former first, and then the latter when needed.
The two big reasons why pre-parsing is faster are:
it doesn't build the AST (abstract syntax tree), which is usually the output of the parser
it doesn't do scope resolution for variables.
There are a few other internal steps done by the parser in preparation for code generation that aren't needed when only checking for errors, but the above two are the major points.
(Full) parsing and code generation are separate steps.
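As a rough illustration of the strategy (the heuristics and thresholds are engine-specific, and the names below are made up):

    // bigHelper is only pre-parsed when the script loads: its syntax is
    // checked, but no AST is built and no scope resolution is done for it.
    function bigHelper(data) {
      // ...potentially thousands of lines that may never run...
      return data.map(x => x * 2);
    }

    // Wrapping a function in parentheses is a common hint that it will be
    // invoked immediately, so engines typically full-parse it right away.
    const config = (function () {
      return { verbose: false };
    })();

    // The first call triggers the full parse of bigHelper (AST, scope
    // resolution, bytecode) just before it executes.
    bigHelper([1, 2, 3]);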
Related
I read an article about how JavaScript engines work, but there is something that confuses me. It says the following:
The JavaScript code is first parsed
The source code is translated to bytecode
The bytecode gets optimized
The code generator takes the bytecode and translates it into low-level assembly code
Is the last step true?
Is the above how JavaScript engines like V8 work?
(V8 developer here.)
Yes, the JavaScript engines used in "modern" (since 2008) browsers have just-in-time compilers that compile JavaScript to machine code. That's probably what that article meant to say. If you want to distinguish the terms "assembly language" and "machine code", then the former would be the human-readable form (such as mov eax, ebx, written by humans and produced by disassemblers) and the latter would be the binary-encoded form that the CPU understands (such as 0x89 0xD8, produced by compilers/assemblers). I'd say that the term "assembly code" is sufficiently ambiguous that it could refer to either, or could imply that you don't want to distinguish.
I find the third step in your description more misleading: byte code is typically not optimized. The bytecode interpreter, if it exists, is usually the engine's first execution tier, and its purpose is to start execution as soon as possible, without first spending any time on optimizations. If a function runs hot enough, the engine will eventually decide to spend the time to optimize it to machine code (depending on the engine, possibly in a succession of several increasingly powerful but costly compilers). These later, optimizing tiers may or may not take the bytecode as input; alternatively they can parse the source again to build an AST (taking V8 as a specific example, it used to do the latter and is currently doing the former).
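To make the tiering concrete, here is a hedged sketch (tier names, thresholds, and the exact trigger conditions vary by engine; nothing below uses any engine-specific API):

    function distance(a, b) {
      return Math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2);
    }

    // The first calls typically run as interpreted bytecode. Once the
    // profiler has seen the function called many times with consistently
    // shaped arguments, an optimizing tier may compile it to machine code
    // specialized for objects of the form { x, y }.
    let total = 0;
    for (let i = 0; i < 1e6; i++) {
      total += distance({ x: i, y: 0 }, { x: 0, y: i });
    }
    console.log(total);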
Side note: that article is pretty silly indeed. Example:
techniques like inlining (removing white space)
That's so wrong that it's outright funny :-D
American Fuzzy Lop and the conceptually related LLVM libFuzzer not only generate random fuzzed strings, but also watch the branch coverage of the code under test and use genetic algorithms to try to cover as many branches as possible. This increases the hit frequency of the more interesting code further downstream, since otherwise most of the generated inputs would be stopped early in some deserialization or validation.
But those tools work at native code level, which is not useful for JavaScript applications as it would be trying to cover the interpreter, but not really the interpreted code.
So is there a way to fuzz JavaScript (preferably in browser, but tests running in node.js would help too) with coverage guidance?
I looked at the tools mentioned in this old question, but those that handle JavaScript don't seem to mention anything about coverage profiling. And while radamsa mentions optionally pairing it with coverage analysis, I haven't found any documentation on how to actually do it.
How can one fuzz-test a JavaScript (in-browser) application with coverage guidance?
Fuzzing a JavaScript engine draws a lot of attention, as the number of browser users is about 4 billion. Several works have been done to find bugs in JS engines, including popular large engines, e.g. V8, WebKit, ChakraCore, Gecko, as well as small embedded engines like JerryScript, QuickJS, Jsish, mJS, and MuJS.
It is really difficult to find bugs using AFL because the mutation mechanisms provided by AFL are not practical for JS files; e.g. a bit flip can hardly produce a valid mutation. Since JS is a structured language, several works use the ECMAScript grammar to mutate/generate JS files (seeds):
LangFuzz parses sample JS files and splits them into code fragments. It then recombines the fragments to produce test cases.
jsfunfuzz randomly generates syntactically valid JS statements from a JS grammar manually written for fuzzing.
Dharma is a generation-based, context-free grammar fuzzer, generating files based on given grammar.
Superion extends AFL using tree-based mutation guided by JS grammar.
The above works can easily pass the syntax checks but fail at the semantic checks; a lot of the generated JS seeds are semantically invalid.
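For example, a generated seed like the following (a made-up snippet) passes the syntax check but throws almost immediately, so it never exercises the deeper parts of the engine:

    // Syntactically valid, so it survives parsing...
    function f(v0) {
      var v1 = v0.length;     // ...but v0 is undefined in the call below
      return v2.toFixed(v1);  // and v2 was never declared at all
    }
    f();  // throws a TypeError right away, long before any deep engine code runs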
CodeAlchemist uses a semantics-aware approach to generate code segments based on a static type analysis.
There are two levels of bugs related to JS engines: simple parser/interpreter bugs and deep logic bugs. Recently, the trend is that the number of simple bugs is decreasing while more and more deep bugs are being found.
DIE uses aspect-preserving mutation to preserve the desirable properties of existing CVE proof-of-concepts. It also uses type analysis to generate semantically valid test cases.
Some works focus on mutating intermediate representations.
Fuzzilli is a coverage-guided fuzzer that mutates at the IR level. Mutations on the IR can guarantee semantic validity and can be translated back to JS.
Fuzzing JS engines is an interesting and hot topic, judging by the top security/SE conferences in recent years. I hope this information is helpful.
I'm just wondering whether there is a difference in performance from removing the spaces before and after equal signs, as in these two code snippets.
first
int i = 0;
second
int i=0;
I'm using the first one, but my friend, who is learning HTML/JavaScript, told me that my coding is inefficient. Is that true in HTML/JavaScript? And is it a huge bump in performance? Will it also be the same in C++/C# and other programming languages? As for indentation, he said 3 spaces are better than a tab, but I'm already used to coding like this. So I just want to know if he is correct.
Your friend is a bit misguided.
The extra spaces in the code will make a small difference in the size of the JS file which could make a small difference in the download speed, though I'd be surprised if it was noticeable or meaningful.
The extra spaces are unlikely to make a meaningful difference in the time to parse the file.
Once the file is parsed, the extra spaces will not make any difference in execution speed since they are not part of the parsed code.
If you really want to optimize download or parse speed, the way to do that is to write your code in the most readable fashion possible for best maintainability and then run a minimizer over the deployed code; this is standard practice on many web sites. It gives you the best of both worlds: maintainable, readable code and minimum deployed size.
A minimizer will remove all unnecessary spacing, shorten the names of variables, remove comments, collapse lines, etc... all designed to make the deployed code as small as possible without changing the run-time meaning of the code at all.
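A hedged before/after sketch (the exact output depends on the minimizer and its settings):

    // Readable source, as you write and maintain it:
    function computeTotalPrice(items, taxRate) {
      // Sum the item prices, then apply tax.
      let subtotal = 0;
      for (const item of items) {
        subtotal += item.price;
      }
      return subtotal * (1 + taxRate);
    }

    // Roughly what a minimizer emits for deployment (same behavior, fewer bytes):
    // function computeTotalPrice(t,n){let e=0;for(const r of t)e+=r.price;return e*(1+n)}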
C++ is a compiled language. As such, only the compiler that the developer uses sees any extra spaces (same with comments). Those spaces are gone once the code has been compiled into native code which is what the end-user gets and runs. So, issues about spaces between elements in a line are simply not applicable at all for C++.
Javascript is an interpreted language. That means the source code is downloaded to the browser and the browser then parses the code at runtime into some opcode form that the interpreter can run. The spaces in Javascript will be part of the downloaded code (if you don't use a minimizer to remove them), but once the code is parsed, those extra spaces are not part of the run-time performance of the code. Thus, the spaces could have a small influence on the download time and perhaps an even smaller influence on the parse time (though I'm guessing unlikely to be measurable or meaningful). As I said above, the way to optimize this for Javascript is to use spaces to enhance readability in the source code and then run a minimizer over the code to generate a deployed version of the code to minimize the deployed size of the file. This preserves maximum readability and minimizes download size.
There is little (JavaScript) to no (C#, C++, Java) difference in performance. In the compiled languages in particular, the source code compiles to the exact same machine code.
Using spaces instead of tabs can be a good idea, but not because of performance. Rather, if you aren't careful, use of tabs can result in "tab rot", where there are tabs in some places and spaces in others, and the indentation of the source code depends on your tab settings, making it hard to read.
I'm writing a parser for a templating language which compiles into JS (if that's relevant). I started out with a few simple regexes, which seemed to work, but regexes are very fragile, so I decided to write a parser instead. I started by writing a simple parser that remembered state by pushing/popping off of a stack, but things kept escalating until I had a recursive descent parser on my hands.
Soon after, I compared the performance of all my previous parsing methods. The recursive descent parser was by far the slowest. I'm stuck: Is it worth using a recursive descent parser for something simple, or am I justified in taking shortcuts? I would love to go the pure regex route, which is insanely fast (almost 3 times faster than the RD parser), but is very hacky and unmaintainable to a degree. I suppose performance isn't terribly important because compiled templates are cached, but is a recursive descent parser the right tool for every task? I guess my question could be viewed as more of a philosophical one: to what degree is it worth sacrificing maintainability/flexibility for performance?
Recursive descent parsers can be extremely fast.
These are usually organized with a lexer that uses regular expressions to recognize language tokens, which are fed to the parser. Most of the work in processing the source text is done character by character by the lexer, using the insanely fast FSAs that the REs are often compiled into.
The parser only sees tokens occasionally compared to the rate at which the lexer sees characters, so its speed often doesn't matter. However, when comparing parser-to-parser speeds, ignoring the time required to lex the tokens, recursive descent parsers can be very fast because they implement the parser stack using function calls, which are already very efficient compared to a general parser's push-current-state-on-a-simulated-stack.
So, you can have your cake and eat it, too. Use regexps for the lexemes. Use the parser (any kind; recursive descent is just fine) to process lexemes. You should be pleased with the performance.
This approach also satisfies the observation made by other answers: write it in a way that makes it maintainable. Lexer/parser separation does this very nicely, I assure you.
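A minimal sketch of that split for a tiny expression grammar (all names here are illustrative, not from any particular library; error handling is kept to the bare minimum):

    // Lexer: one regular expression recognizes all the token kinds.
    function lex(src) {
      const tokenRe = /\s*(?:(\d+)|([+*()]))/y;  // numbers and operators; sticky flag
      const tokens = [];
      let m;
      while ((m = tokenRe.exec(src)) !== null) {
        if (m[1] !== undefined) tokens.push({ kind: "num", value: Number(m[1]) });
        else tokens.push({ kind: m[2] });
      }
      return tokens;
    }

    // Recursive descent parser for:
    //   expr -> term ('+' term)* ; term -> factor ('*' factor)* ; factor -> num | '(' expr ')'
    function parse(tokens) {
      let pos = 0;
      const peek = () => tokens[pos];
      const eat = (kind) => {
        if (!peek() || peek().kind !== kind) throw new SyntaxError("expected " + kind);
        return tokens[pos++];
      };
      function expr() {
        let node = term();
        while (peek() && peek().kind === "+") { eat("+"); node = { op: "+", left: node, right: term() }; }
        return node;
      }
      function term() {
        let node = factor();
        while (peek() && peek().kind === "*") { eat("*"); node = { op: "*", left: node, right: factor() }; }
        return node;
      }
      function factor() {
        if (peek() && peek().kind === "num") return { num: eat("num").value };
        eat("("); const node = expr(); eat(")");
        return node;
      }
      const ast = expr();
      if (pos !== tokens.length) throw new SyntaxError("unexpected trailing input");
      return ast;
    }

    console.log(JSON.stringify(parse(lex("1 + 2 * (3 + 4)"))));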
Readability first, performance later...
So if your parser makes the code more readable, then it is the right tool.
to what degree is it worth sacrificing maintainability/flexibility for performance?
I think it's very important to write clear, maintainable code as a first priority. Unless your code not only shows that it is a bottleneck, but your application's performance actually suffers from it, you should consider clear code to be the best code.
It's also important not to reinvent the wheel. The comment about taking a look at another parser is a very good one. Common solutions for writing routines such as this are often readily available.
Recursion is very elegant when applied to a suitable problem. In my own experience, slow code due to recursion is the exception, not the norm.
A Recursive Descent Parser should be faster
...or you're doing something wrong.
First off, your code should be broken into 2 distinct steps: Lexer + Parser.
Some reference examples online will tokenize the entire input up front into a large intermediate data structure and then pass that along to the parser. While good for demonstration, don't do this; it doubles the time and memory cost. Instead, as soon as a match is determined by the lexer, notify the parser of either a state transition or a state transition + data.
As for the lexer: this is probably where you'll find your current bottleneck. If the lexer is cleanly separated from your parser, you can try swapping between Regex and non-Regex implementations to compare performance.
Regex isn't, by any means, faster than reading raw strings. It just avoids some common mistakes by default. Specifically, the unnecessary creation of string objects. Ideally, your lexer should scan your code and produce an output with zero intermediate data except the bare minimum required to track state within your parser. Memory-wise you should have:
Raw input (i.e. the source)
Parser state (e.g. isExpression, isStatement, row, col)
Data (e.g. AST, tree, 2D array, etc.)
For instance, if your current lexer matches a non-terminal and copies every char over one by one until it reaches the next terminal, you're essentially recreating that string for every letter matched. Keep in mind that strings are immutable; concat will always create a new string. You should be scanning the text using pointer arithmetic or some equivalent (in JS, an index into the source string).
To fix this problem, you need to scan from the startPos of the non-terminal to the end of the non-terminal and copy only when a match is complete.
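A small JS sketch of the difference, using identifier scanning as the example (function names are just for illustration):

    const IDENT_CHAR = /[A-Za-z0-9_]/;

    // Anti-pattern: builds a new intermediate string for every character.
    function scanIdentSlow(src, start) {
      let ident = "";
      let i = start;
      while (i < src.length && IDENT_CHAR.test(src[i])) {
        ident += src[i];  // each += conceptually creates yet another string
        i++;
      }
      return { ident, end: i };
    }

    // Preferred: remember the start index and copy once when the match is complete.
    function scanIdentFast(src, start) {
      let i = start;
      while (i < src.length && IDENT_CHAR.test(src[i])) i++;
      return { ident: src.slice(start, i), end: i };  // a single copy
    }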
Regex supports all of this by default out of the box, which is why it's a preferred tool for writing lexers. Instead of trying to write a Regex that parses your entire grammar, write one that only focuses on matching terminals & non-terminals as capture groups. Skip tokenization, and pass the results directly into your parser/state machine.
The key here is: don't try to use Regex as a state machine. At best it will only work for regular (i.e. Chomsky Type 3, no stack) declarative syntaxes -- hence the name Regular Expression. For example, HTML is a context-free (i.e. Chomsky Type 2, stack-based) declarative syntax, which is why a Regex alone is never enough to parse it. Your grammar, and generally all templating syntaxes, fall into this category. You've clearly hit the limit of Regex already, so you're on the right track.
Use Regex for tokenization only. If you're really concerned with performance, rewrite your lexer to eliminate any and all unnecessary string copying and/or intermediate data. See if you can outperform the Regex version.
The key point: the Regex version is easier to understand and maintain, whereas your hand-rolled lexer will likely be just a tinge faster if written correctly. Conventional wisdom says do yourself a favor and prefer the former. In terms of Big-O complexity, there shouldn't be any difference between the two; they're two forms of the same thing.
I have found that the size of the compiled JavaScript grows faster than I had expected. Adding a few lines of Java code to my project can increase the script size by several KB.
At the moment my compiled project weighs in at 1 MB. I'm not using any external libraries except for those for MVP (Activities & Places), testing (JUnit), and logging.
I would like to know if there are any coding practices/recommendations to keep the compiled script as small as possible. I'm not referring to code splitting, but to coding techniques or patterns that can make the compiled JavaScript effectively smaller.
Many thanks
GWT uses a "pay as you go" design philosophy, and since you're not allowed to use reflection the compiler can statically prove (on a method-by-method basis) that a section of code is "reachable", and eliminate those that are not. For example, if you never use the remove() method on ArrayList, then that code does not get included in the resulting JavaScript.
If you are seeing several kilobyte jumps with the addition of just a few lines, it probably means that you've introduced the use of a new type (and possibly one that depends on other new types) that you had not yet been using. It might also mean that you've made a change to send this new type "over the wire" back to the server, in which case a GWT generator had to include JavaScript for marshaling that type, and any new types that are reachable via its "has-a" and "is-a" references.
So if it were me, I would begin there: when you catch a 2-line change making a multi-kilobyte increase, start by looking at the types and asking whether it is a type that you have used before, and whether you're sending a new type over the wire, and whether that type also depends on other types under the hood.
One final thought: in Ray Ryan's 2009 presentation at Google I/O he mentioned a superstition that he had picked up from the GWT compiler team, where they recommended against using generic types (I'm not speaking of Java Generics here, but rather supertypes) as RPC arguments & return values. In particular, instead of having your RPC call take or return a Map, have it take or return a HashMap instead. The belief is that the GWT generator can then narrow the amount of serialization code that it has to create at compile time (because it could, for example, refrain from generating serialization code for a TreeMap).
I hope this helps.
GWT creates a different output version for each supported browser, so when you say the project size is 1 MB, are you referring to the combined size of these? (Each browser only downloads the one it actually needs.)
I have tried experimenting with the generated output when using various inheritance/class/generics constructs. Unfortunately, the extra complexity introduced far outweighs the small size improvements gained (e.g. when dropping generics).
I have been on some large GWT projects (50,000+ lines) and have found code obfuscation, coupled with turning on compression on the web server, to be the simplest and most effective way to minimize the downloads. If this does not shrink the code enough, then look into GWT's compilation report, which you can use to pinpoint potentially problematic classes and places to insert code splitting.