JS switch statements: Does most-likely-case optimisation matter?

Assuming this construct
switch(foo) {
  case "foo":
    // ...
    break;
  case "bar":
    // ...
    break;
  ...
  default:
    // ...
    break;
}
or a similar conditional block (generic logical operators only), would it make sense, performance-wise, to actually put the most likely condition first?
And if so, what's the threshold where it begins to make sense vs. the "trouble" to figure out what the most likely condition will be?

This sort of micro-optimization will cause more problems than it solves. With JavaScript, it's hard to assess such constructs because JS engines abstract away so many implementation details, and profiling probably won't be very insightful either.
JS is a scripting language, not C. Make your code readable and concise; incidentally, that should be your mantra no matter what language you write in.

The answer is yes, it would make sense. If you know one condition has a much higher chance of being true, why not check it first? I believe that's sound programming even in a scripting language.
Just because it makes sense, though, doesn't make it necessary. If the code isn't run frequently, the order matters negligibly. However, in a large loop with several condition checks, the wrong order could slow an algorithm down slightly.
The time to worry about it is when you notice a lag. Even then, reordering conditions would probably be at the bottom of my list of things to do.

The threshold where it would buy you anything is when, if you randomly pause execution several times, you actually catch it in the middle of that switch statement, as opposed to somewhere else entirely.
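If you did want to check empirically rather than guess, a rough micro-benchmark is easy to sketch. This is a toy example (function names and the 90/10 input split are made up, and JIT warm-up and timer resolution make results noisy):

// Same switch, most likely case first vs. last.
function likelyFirst(foo) {
  switch (foo) {
    case "common": return 1; // the value we pass ~90% of the time
    case "rare": return 2;
    default: return 0;
  }
}

function likelyLast(foo) {
  switch (foo) {
    case "rare": return 2;
    case "common": return 1;
    default: return 0;
  }
}

// Pre-generate inputs so the random number generator isn't timed.
const inputs = [];
for (let i = 0; i < 1e6; i++) {
  inputs.push(Math.random() < 0.9 ? "common" : "rare");
}

for (const fn of [likelyFirst, likelyLast]) {
  const start = Date.now();
  let sum = 0;
  for (const s of inputs) sum += fn(s);
  console.log(fn.name, Date.now() - start, "ms (checksum:", sum + ")");
}

In practice the two usually land within noise of each other, which supports the "don't bother" advice above.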

Related

JavaScript is compiled or interpreted language or both?

I'm sorry for the silly question, but I'm confused about this.
I have been reading the You Don't Know JS Yet book, and it says that
JS is most accurately portrayed as a compiled language.
The authors explain this with some examples, and it makes sense to me.
But when I search on the internet, most people say that JS is an interpreted language.
I have read that JS engines use many kinds of tricks to handle JS programs, like JIT compilation and hot recompilation.
So should I consider JavaScript to be both a compiled and an interpreted language?
UPDATE:
When JavaScript first appeared in 1995-96, Brendan Eich created the very first JS engine, SpiderMonkey (still used in Mozilla Firefox). At the time, JavaScript was created with browsers in mind, so that any file coming from a server could be quickly interpreted and displayed by the browser.
An interpreter was the best choice for that, since interpreters execute code line by line and show results immediately.
But as time progressed, performance became an issue. The problem with interpreters is that running the same code over and over again in a loop like this one:
const someCalculation = (num1, num2) => {
  return num1 + num2;
};

for (let i = 0; i < 10000; i++) {
  someCalculation(5, 6); // 11
}
can get really, really slow.
So the best option was to introduce a compiler, which helps here. It takes a little more time to start up, because it has to go through a compilation step first: read our code, understand it, and translate it into another form. But the compiler can be smart. When it sees code like the above, where we loop with the same inputs producing the same outputs, it can simplify it: instead of calling the function on every iteration, it can replace the call with its result. Something like this:
const someCalculation = (num1, num2) => {
  return num1 + num2;
};

for (let i = 0; i < 10000; i++) {
  11; // the compiler no longer calls someCalculation on every iteration
}
Because the compiler does not repeat the translation on each pass through the loop, the code it generates is faster.
These sorts of edits that compilers make are called optimizations.
Thus JavaScript combined the interpreter and the compiler to get the best of both worlds: browsers started adding JIT compilers, for just-in-time compilation, to make their engines faster.
Inside the engine, a profiler keeps watch on repeatedly executed code and passes it on to the compiler for optimization.
This means that the execution speed of the JavaScript code we feed into the engine gradually improves, because the profiler and the compiler are constantly making updates and changes to our bytecode to be as efficient as possible. So the interpreter allows us to run the code right away, while the profiler and compiler allow us to optimize that code as it runs.
Now let's come to some conclusions:
Now that we know how the JS engine works under the hood, we as programmers can write more optimization-friendly code: code that the compiler can take and run faster than our regular JavaScript. However, we need to make sure we don't confuse the compiler. It is not perfect; it can make mistakes, and an attempted optimization can end up doing exactly the opposite. When it makes a mistake and something unexpected happens, the engine performs a de-optimization, which takes even longer, as it has to throw the optimized code away and fall back to the interpreter.
NOW THE BIG QUESTION: Is JavaScript an interpreted language?
ANSWER: Yes, initially. When JavaScript first came out, a JavaScript engine such as SpiderMonkey (created by Brendan Eich) interpreted JavaScript so it could run inside our browsers and tell our computers what to do.
But things have evolved now: we don't just have interpreters, we also use compilers to optimize our code. So this is a common misconception.
When someone says JavaScript is an interpreted language, there is some truth to it, but it depends on the implementation. You could make an implementation of a JavaScript engine that only compiles. Technically, it all depends on the implementation.
JavaScript is initially interpreted. When the engine encounters a piece of code for the first time, it reads the tokens one by one and executes them exactly according to the specification. This is level 0.
If a piece of code is executed often, say 100 times (the exact number depends on the browser), it is considered "warm". The browser precomputes the tokenization and basic operations into slightly faster bytecode. At this stage no assumptions are made, and the bytecode is completely equivalent to the original code. This is level 1.
If code is executed even more often, say 10,000 times, and the types of the parameters are always the same, a further compilation step can run. Many JavaScript operators do wildly different things depending on the operand types, so every operator carries logic to check which variant to perform (e.g., numeric addition vs. string concatenation). Different amounts of memory also need to be allocated for different types of objects. Performing the type checks once at the top of the function and allocating all the memory at once is much faster. This is level 2.
Depending on the browser, there may be more optimization levels, usually based on stricter assumptions about the parameters; there might be a more efficient addition for integers, for example. Of course, if you ever call the function with a different parameter type, the browser has to fall back to executing unoptimized raw JS again.
Practical Advice
This all happens under the hood, and as a programmer you most likely won't ever have to worry about it. The optimization will never change the big-O complexity of your program, which is usually what makes software slow. You might be able to squeeze out a little speed by keeping the parameter types of your most-called functions consistent, though usually not enough to be worth the trouble.
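To make the "consistent parameter types" advice concrete, here is a minimal sketch. The tier numbers and thresholds above are illustrative, and the loop count here is arbitrary; only the general shape of what the optimizer likes vs. dislikes is the point:

function add(a, b) {
  return a + b; // '+' means numeric addition or string concatenation, depending on types
}

// Monomorphic call site: always numbers, so after warming up the engine
// can specialize add() for numeric addition and skip the per-call type checks.
let total = 0;
for (let i = 0; i < 100000; i++) {
  total = add(total, i);
}

// Feeding it a new type invalidates that assumption: the engine must fall
// back to (or recompile) a more generic version of the function.
console.log(add("total: ", total));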
Check MDN: it's where you'll find the most accurate information. To quote MDN on the topic:
JavaScript (JS) is a lightweight, interpreted, or just-in-time compiled programming language
Basically, since JS is used in multiple environments, it can be either one or the other.

Researching Javascript DEOPT reasons in Chrome 79 for a pet project

I've been tinkering with a Javascript chess engine for a while. Yeah yeah I know (chuckles), not the best platform for that sorta thing. It's a bit of a pet project, I'm enjoying the academic exercise and am intrigued by the challenge of approaching compiled language speeds. There are other quirky challenges in Javascript, like the lack of 64bit integers, that make it unfit for chess, but paradoxically interesting, too.
A while back I realized that it was extremely important to be careful with constructs, function parameters, etc. Everything matters in chess programming, but it seems that a lot matters when working with JIT compilers (V8 Turbofan) via Javascript in Chrome.
Via some traces, I'm seeing some eager DEOPTs that I'm having trouble figuring out how to avoid.
DEOPT eager, wrong map
The code that's referenced by the trace:
if (validMoves.length) { ...do some stuff... }
The trace points directly to the validMoves.length argument of the IF conditional. validMoves is only ever an empty array [] or an array of move objects [{Move},{Move},...]
Would an empty array [] kick off a DEOPT?
Incidentally, I have lots of lazy and soft DEOPTs, but if I understand correctly, these are not so crucial and just part of how V8 wraps its head around my code before ultimately optimizing it; in --trace-opt, the functions with soft,lazy DEOPTs, do seem to eventually be optimized by Turbofan, and perhaps don't hurt performance in the long run so much. (For that matter, the eager DEOPT'ed functions seem to eventually get reoptimized, too.) Is this a correct assessment?
Lastly, I have found at times that by breaking up functions that have shown DEOPTs, into multiple smaller function calls, I've had notable performance gains. From this I've inferred that the larger more complex functions are having trouble getting optimized and that by breaking them up, the smaller compartmentalized functions are being optimized and thus feeding my gains. Does that sound reasonable?
the lack of 64bit integers
Well, there are BigInts now :-)
(But in most engines/cases they're not suitable for high-performance operations yet.)
Would an empty array [] kick off a DEOPT?
Generally no. There are, however, different internal representations of arrays, so that may or may not be what's going on there.
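For context, V8 internally tracks "elements kinds" for arrays, and transitions between them are one-way. The exact kinds and names are engine internals that vary by version, but a rough sketch of the idea:

const a = [1, 2, 3]; // packed, small-integer elements
a.push(4.5);         // transitions to packed doubles
a.push("x");         // transitions to packed generic elements

const b = [1, 2, 3];
b[10] = 11;          // indices 3..9 become holes: the array turns "holey",
                     // and it stays holey even if the gaps are filled later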
[lazy, soft, eager...] Is this a correct assessment?
Generally yes. Usually you don't have to worry about deopts, especially for long-running programs that experience a few deopts early on. This is true for all the adjectives that --trace-deopt reports -- those are all just internal details. ("eager" and "lazy" are direct opposites of each other and simply indicate whether the activation of the function that had to be deoptimized was top-of-stack or not. "soft" is a particular reason for a deopt, namely a lack of type feedback, and V8 choosing to deoptimize instead of generating "optimized" code despite lack of type feedback, which wouldn't be very optimized at all.)
There are very few cases where you, as a JavaScript developer, might want to care about deopts. One example is when you've encountered a case where the same deopt happens over and over again. That's a bug in V8 when it happens; these "deopt loops" are rare, but occasionally they do occur. If you have found such a case, please file a bug with repro instructions.
Another case is when every CPU cycle matters, especially during startup / in short-running applications, and some costly functions gets deoptimized for a reason that might be avoidable. That doesn't seem to be your case though.
[breaking up functions...] Does that sound reasonable?
Breaking up functions can be beneficial, yes; especially if the functions you started with were huge. Generally, functions of all sizes get optimized; obviously larger functions take longer to optimize. This is a tricky area with no simple answers; if functions are too small then that's not helpful for performance either. V8 will perform some inlining, but the decisions are based on heuristics that naturally aren't always perfect. In my experience, manually splitting functions can in particular pay off for long-running loops (where you'd put the loop into its own function).
EDIT: to elaborate on the last point as requested, here's an example: instead of
function big() {
  for (...) {
    // long-running loop
  }
  /* lots more stuff... */
}

You'd split it as:

function loop() {
  for (...) {
    // same loop as before
  }
}

function outer() {
  loop();
  /* same other stuff as before */
}
For a short loop, this is totally unnecessary, but if significant time is spent in the loop and the overall size of the function is large, then this split allows optimization to happen in more fine-grained chunks and with fewer ("soft") deopts.
And to be perfectly clear: I only recommend doing this if you are seeing a particular problem (e.g.: --trace-opt telling you that your biggest function is optimized two or more times, taking a second each time). Please don't walk away from reading this answer thinking "everyone should always split their functions", that's not at all what I'm saying. In extreme cases of huge functions, splitting them can be beneficial.
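For anyone reproducing this kind of analysis: the optimization and deoptimization traces discussed above come from V8's flags, which can be passed through Node (or to Chrome via --js-flags). The script name here is just a placeholder:

node --trace-opt --trace-deopt engine.js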

Use single line functions or repeat code

I was writing JavaScript code in which I needed to show and hide some sections of a web page. I ended up with functions like these:
function hideBreakPanel() {
  $('section#break-panel').addClass('hide');
}

function hideTimerPanel() {
  $('section#timer-panel').addClass('hide');
}

function showBreakPanel() {
  resetInputValues();
  $('section#break-panel').removeClass('hide');
}

function showTimerPanel() {
  resetInputValues();
  $('section#timer-panel').removeClass('hide');
}
My question is about code quality and refactoring. When is it better to have simple functions like these, and when is it better to invoke the jQuery call directly? I suppose the latter performs better, but in this case performance is not a problem, as it is a really simple site.
I think you're fine with having functions like these; after all, hideBreakPanel might later involve something more than applying a class to an element. The only thing I'd point out is to minimize the amount of repeated code in those functions. Don't worry about the function call overhead: unless you're in a performance-critical scenario, the runtime couldn't care less.
One way you could arrange the functions to avoid repeating yourself:
function hidePanel(name) {
  $('section#' + name + '-panel').addClass('hide');
}

function showPanel(name) {
  resetInputValues();
  $('section#' + name + '-panel').removeClass('hide');
}
If you absolutely must have a shorthand, you can then do:
function hideBreakPanel() {
  hidePanel("break");
}
Or even
var hideBreakPanel = hidePanel.bind(null, "break"); // pre-fills the name argument
This way you encapsulate the common functionality in one function, and you won't have to update all your hide functions to amend the way hiding is done.
My question is about code quality and refactoring. When is it better to have simple functions like these, and when is it better to invoke the jQuery call directly? I suppose the latter performs better, but in this case performance is not a problem, as it is a really simple site.
Just from a general standpoint, you can get into a bit of trouble later if you accumulate a lot of one-liner functions (and lines of code crammed together, and the like) when the goal is merely syntactic sugar and a very personal definition of clarity; such definitions can be quite transient and change like fashion trends.
That's because the quality that gives code longevity is often, above all, familiarity and, to a lesser extent, centralization (fewer branches of code to jump through). Being able to recognize, and not absolutely loathe, code you wrote years later (not finding it bizarre or alien) tends to favor the qualities that reduce the number of concepts in the system and flow toward very idiomatic use of languages and libraries. There are human metrics here beyond formal software-engineering metrics, like simply staying motivated to keep maintaining the same code.
But it's a balancing act. If the motivation for these shorter, sweeter function calls goes beyond syntax (a central place to modify, extend, and instrument behavior; improved safety in otherwise error-prone code; and so on), then even a bunch of one-liner functions can become a great aid in the future. The key to keeping familiarity in that case is to make sure you (and your team, if applicable) get plenty of reuse out of such functions, and to incorporate them into daily practices and standards.
Idiomatic code tends to be quite safe here, because we are saturated with examples of it, which keeps it familiar. Any time you go deep into establishing proprietary interfaces, you risk losing that quality. Yet proprietary interfaces are definitely needed, so the key is to make them count.
Another, somewhat esoteric, view is that functions that depend on each other tend to age together. An image-processing function that operates only on simple types provided by the language tends to age well; we can find C functions of this sort, dating back to the 80s, that are still relevant and easy to apply today. Such code stays familiar. If the routine instead depends on an exotic pixel-and-color library and unusual math routines, it tends to age much more quickly (it loses that familiarity and applicability), because it now ages with everything it depends on. So, always with an eye toward balance and trade-offs, it can be useful to resist the temptation to venture too far outside the norms, and to avoid coupling your code to too many exotic interfaces (especially ones that serve as little more than sugar). Sometimes the slightly more verbose code that reduces the number of concepts and directly uses what is already available in the system is preferable.
Yet, as is often the case, it depends. But these are some less frequently mentioned qualities to keep in mind when making these decisions.
If resetInputValues() returns undefined (i.e., returns nothing) or any other falsy value, you could refactor to:
function togglePanel(type, toHide) {
  // Boolean() matters here: given an undefined state, jQuery's toggleClass would
  // toggle instead of removing. resetInputValues() only runs when showing.
  $('section#' + type + '-panel').toggleClass('hide', Boolean(toHide || resetInputValues()));
}
Use, e.g., togglePanel('break') in place of showBreakPanel(), and togglePanel('break', true) in place of hideBreakPanel().

Is it better to wrap code into an 'IF' statement, or is it better to 'short circuit' the function and return?

I'm doing some coding in JavaScript, and I have a lot of instances where I have to check some things before I proceed. I've gotten into the habit of returning early from the function, but I'm not sure I'm doing it right, or whether it will impact the complexity of my code as it grows.
I want to know from more experienced JavaScript coders which of the following two examples is better general practice. Or is it irrelevant, and both are OK ways of writing this particular if block?
1) Returning Early or "Short Circuit" as I call it (Guard Clause).
ServeAlcohol = function(age)
{
  if(age < 19)
    return;

  //...Code here for serving alcohol.....
}
..Or...
2) Wrap code into an IF statement.
ServeAlcohol = function(age)
{
  if(age >= 19)
  {
    //...Code here for serving alcohol.....
  }
}
Usually I have input validation return right away. Imagine if you had a bunch of conditionals: you'd get a mess of nested ifs right away.
Generally once I get past input validation I avoid multiple returns, but for validation I return right away. Keeps it cleaner IMHO.
I prefer the first one, because it's a guard condition and you exit directly. I don't think there are performance issues with either, though; it's just a matter of preference. Either way, execution returns directly.
Personal choice. For me, if there are some "stop" conditions I can check at the beginning of the method, I prefer the "return" pattern, but only if I can do them all at the beginning of the method.
FWIW, I'll offer a contrary opinion. Structured programming suggests that a function should have one point of exit. I think some compiler optimizations are unavailable if you use early returns, break statements, gotos, and the like. More branches in your code also make it harder to keep the CPU pipeline full, resulting in a possible performance reduction. There are also reasons for avoiding early returns that deal with rigorous (i.e., algebraic) reasoning about correctness.
Structured Programming wiki article
It really depends. I do like one point of return for simple functions, but anything longer than 10-20 lines and I'll start breaking things up for the sake of code clarity.
I prefer the first one, because it's a process of elimination: you return out of the function before the program even has to step through the next round of logic.
I call it my prereq check: the function won't execute if it doesn't meet the prerequisites.
In fact, I do this all the time. The classic case is a function that expects an integer and gets a string: I check at the top of the function whether the value is an integer, NOT whether it isn't a string or some other object/type; that's just stupid in my book.
It's like a college application to Harvard, a prerequisite:
'I don't even want you to come for an interview if you don't have a 3.5 GPA or higher!'
:=)
The first one is usually preferred, simply because it reduces the needed indentation (which could otherwise get way out of hand). There is no real performance difference.
Some people think that each function should have a single exit point. However, I find it clearer when quick conditional checks like the one you mentioned are done at the beginning. It also keeps some code from being run unnecessarily.
A general rule I've heard is basically to fail early and fail often. You never know when a single line of code points at some super-overloaded setter that's working way harder than you might think. If you can prevent that line of code from executing, say by returning early, your code is going to be that much more efficient.
In other words, if you can return early and keep code from executing, do it at every turn, especially if you are concerned at all about performance. This might not be as important in something like JS, I suppose (I'm more of an AS3 guy), but the same logic applies.
If you have a lot of cases, it might also be best to trace out the point of failure in each one; in your example, trace out that the function returned early because the age was too low. That'll help other developers who go in and attempt to debug your code: they'll know where things fail and why.
Hope that helps! :)
Alternatively, since JavaScript is Scheme in disguise:
HandleRequestForAlcohol = function(age) {
  ( IsUnderAge(age) ? RespondUnderAge : ServeAlcohol )();
}
The idiom for selecting the function isn't that important; the point is rather that if you are doing complex validation and then have multiple processes, you should factor these into separate functions rather than making one big one, unless it's in a very performance-critical bit of code.
In my opinion, as a best practice, it is more important to consistently use braces with your control blocks, even if the body is only one line.
Consistent
if ( condition ) {
  statement;
  statement;
}

if ( condition ) {
  statement;
}
Not consistent
if ( condition ) {
  statement;
  statement;
}

if ( condition )
  statement;
But even still, this is completely subjective.
As for when to break out of a function, and how many levels of indentation are acceptable, that's subjective too. Research and experience suggest that exiting a function at only one point (the end) makes it easier to debug and optimize; on the other hand, multiple levels of indentation can make a function difficult to read.
If there are multiple or complex guards, I would use the return. Otherwise, in the case of one simple condition in a smallish function, I prefer using an if.

Using 'return' instead of 'else' in JavaScript

I am working on a project which requires some pretty intricate JavaScript processing, including a lot of nested if-elses in quite a few places. I have generally taken care to optimise the JavaScript as much as possible by reading other tips on Stack Overflow, but I am wondering whether the following two constructs differ in terms of speed alone:
if(some_condition) {
  // process
  return;
}

// Continue the else condition here

vs

if(some_condition) {
  // Process
}
else {
  // The 'else' condition...
}
I always go with the first method. Easier to read, and less indentation. As far as execution speed, this will depend on the implementation, but I would expect them both to be identical.
In many languages, it is a common practice to invert if statements to reduce nesting, or to use preconditions.
Having less nesting in your code improves readability and maintainability.
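For instance, a trivial before/after sketch (the function and property names here are made up for illustration):

// Nested version:
function processUserNested(user) {
  if (user) {
    if (user.isActive) {
      // ...do the actual work...
    }
  }
}

// Inverted, with preconditions up front:
function processUserInverted(user) {
  if (!user) return;
  if (!user.isActive) return;
  // ...do the actual work...
}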
"Profile, don't speculate!"
you're putting the cart before the horse (maintainability by humans trumps machine speed)
you should drive your optimization efforts by measurements, which means
you should time the execution yourself; it'll obviously differ in different browsers and versions
you should only optimize for speed the hotspots of your application (see point 1)
I will use the first approach when ruling out invalid situations, e.g. when doing validations: return as soon as any validation fails. There's no point in going further if a precondition fails. Martin Fowler mentions the same thing in his Refactoring book, where he calls it "Replace Nested Conditional with Guard Clauses". It can really make the code easy to understand.
Here's a Java example.
public void debitAccount(Account account, BigDecimal amount) {
  if (account.user.equals(getCurrentUser())) {
    if (account.balance.compareTo(amount) > 0) {
      account.balance = account.balance.subtract(amount);
    } else {
      // return or throw exception
    }
  } else {
    // return or throw exception
  }
}
VS
public void debitAccount(Account account, BigDecimal amount) {
  if (!account.user.equals(getCurrentUser())) return;   // or throw
  if (account.balance.compareTo(amount) <= 0) return;   // or throw
  account.balance = account.balance.subtract(amount);
}
There won't be any difference in performance. I would recommend the second example for maintainability: in general, it's good practice to have one and only one exit point for a routine. It aids debugging and comprehension.
When there is only a single if..else, the performance is almost the same and it doesn't matter; use whatever reads best in your case. With deeply nested conditions, though, early returns tend to fare better than if...else chains or switch statements.
Maybe slightly, but I don't think it will be measurable unless the rest of the function involves "heavy" (and otherwise redundant, since I assume a return would give the same result) JS calls.
As a side note, I think this is unnecessary micro-optimization, and you should probably look elsewhere for performance improvements, i.e., profile the script with Chrome's developer tools or Firebug for Firefox (or similar tools) and look for slow or long-running calls and functions.
While it depends on the JavaScript implementation in the browser, there should not be any notable difference between them in terms of speed.
The second form is preferable, since breaking the flow is not considered a good programming habit. Consider also that at the assembly level, a jump instruction gets executed either way, regardless of how the condition evaluates.
Speaking from experience, it depends on the condition you are checking.
if .. return is fine and easy to read if you check some boolean condition (a setting, perhaps) that would make all of the following code unnecessary to execute.
if .. else is much easier to read if you expect some value to be one of two (or more) possible values and you want to execute different code for each case, meaning the possible values are of equal interpretive weight and should therefore sit at the same logical level.
Test it yourself. If this JavaScript is being run in the browser, it will almost certainly depend on the browser's JavaScript parsing engine.
My understanding is that it would not make a difference, because you branch on the if condition either way: if some_condition is true, the else portion will not be touched, even without the return.
Suppose the return takes 1ms versus the nested if taking 0.1ms (or vice versa). It's hard to imagine either one being nearly that slow. Now, are you doing it more than 100 times per second? If so, maybe you should care.
In my opinion, return and else are the same for the above case, but in general if-else and if () return; are very different. Use a return statement when you want to leave the current function and hand control back to the caller, whereas with if-else you can keep checking further conditions in the same scope.
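To illustrate that difference, a minimal sketch:

// 'return' leaves the function entirely:
function classify(n) {
  if (n < 0) return "negative";
  if (n === 0) return "zero";
  return "positive"; // reached only when neither guard fired
}

// 'else' keeps execution in the same scope, so code after the chain still runs:
function describe(n) {
  let label;
  if (n < 0) {
    label = "negative";
  } else if (n === 0) {
    label = "zero";
  } else {
    label = "positive";
  }
  return "n is " + label; // runs for every input
}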
