'throw' of exception caught locally. Why is this bad? [duplicate] - javascript

To avoid all the standard answers I could have Googled for, I will provide an example you can all attack at will.
C# and Java (and too many others) have, for plenty of types, an 'overflow' behaviour I don't like at all (type.MaxValue + type.SmallestValue == type.MinValue; for example, int.MaxValue + 1 == int.MinValue).
But, given my vicious nature, I'll add some insult to this injury by extending this behaviour to, let's say, an overridden DateTime type. (I know DateTime is sealed in .NET, but for the sake of this example, I'm using a pseudo-language that is exactly like C#, except that DateTime isn't sealed.)
The overridden Add method:
/// <summary>
/// Increments this date with a timespan, but loops when
/// the maximum value for datetime is exceeded.
/// </summary>
/// <param name="ts">The timespan to (try to) add</param>
/// <returns>The Date, incremented with the given timespan.
/// If DateTime.MaxValue is exceeded, the sum will 'overflow' and
/// continue from DateTime.MinValue.
/// </returns>
public override DateTime Add(TimeSpan ts)
{
    try
    {
        return base.Add(ts);
    }
    catch (ArgumentOutOfRangeException)
    {
        // Calculate by how much MaxValue is exceeded
        // -- regular program flow.
        TimeSpan saldo = ts - (DateTime.MaxValue - this);
        return DateTime.MinValue.Add(saldo);
    }
    catch (Exception anyOther)
    {
        // 'Real' exception handling.
        throw;
    }
}
Of course an if could solve this just as easily, but the fact remains that I just fail to see why you couldn't use exceptions (logically, that is; I can see that when performance is an issue, exceptions should be avoided in certain cases).
I think in many cases they are clearer than if-structures and don't break any contract the method is making.
IMHO the "Never use them for regular program flow" reaction everybody seems to have is not as well founded as the strength of that reaction would justify.
Or am I mistaken?
I've read other posts, dealing with all kinds of special cases, but my point is there's nothing wrong with it as long as you are both:
Clear
Honouring the contract of your method
Shoot me.

Have you ever tried to debug a program raising five exceptions per second in the normal course of operation?
I have.
The program was quite complex (it was a distributed calculation server), and a slight modification on one side of the program could easily break something in a totally different place.
I wish I could have just launched the program and waited for exceptions to occur, but there were around 200 exceptions during start-up in the normal course of operations.
My point: if you use exceptions for normal situations, how do you locate unusual (i.e. exceptional) situations?
Of course, there are other strong reasons not to use exceptions too much, especially performance-wise.

Exceptions are basically non-local goto statements with all the consequences of the latter. Using exceptions for flow control violates the principle of least astonishment and makes programs hard to read (remember that programs are written for programmers first).
Moreover, this is not what compiler vendors expect. They expect exceptions to be thrown rarely, and they usually let the throw code be quite inefficient. Throwing exceptions is one of the most expensive operations in .NET.
However, some languages (notably Python) use exceptions as flow-control constructs. For example, iterators raise a StopIteration exception if there are no further items. Even standard language constructs (such as for) rely on this.

My rule of thumb is:
If you can do anything to recover from an error, catch exceptions
If the error is a very common one (e.g. the user tried to log in with the wrong password), use return values (see the sketch after this list)
If you can't do anything to recover from an error, leave it uncaught (or catch it in your main catch-all to do some semi-graceful shutdown of the application)
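To make the rule of thumb above concrete, here is a minimal sketch (in Java here; every class and method name is invented for illustration, not from any real API):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical example: one case per bullet above.
public class ErrorStyles {

    // Very common failure (wrong password): signal it with a return value.
    static boolean login(String user, String password) {
        return "secret".equals(password); // stand-in for a real credential check
    }

    // Recoverable failure (config file missing): catch it and recover.
    static String loadConfig() {
        try {
            return Files.readString(Path.of("app.conf"));
        } catch (IOException e) {
            return ""; // fall back to defaults
        }
    }

    public static void main(String[] args) {
        // Unrecoverable failures: leave them uncaught here, so they reach
        // the JVM's default handler (or a top-level catch-all).
        System.out.println(login("alice", "secret"));
        System.out.println(loadConfig().isEmpty() ? "defaults" : "from file");
    }
}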
The problem I see with exceptions is from a purely syntactic point of view (I'm pretty sure the performance overhead is minimal). I don't like try-blocks all over the place.
Take this example:
try
{
    DoSomeMethod(); //Can throw Exception1
    DoSomeOtherMethod(); //Can throw Exception1 and Exception2
}
catch (Exception1)
{
    //Okay something messed up, but is it SomeMethod or SomeOtherMethod?
}
Another example could be when you need to assign something to a handle using a factory, and that factory could throw an exception:
Class1 myInstance;
try
{
    myInstance = Class1Factory.Build();
}
catch (SomeException)
{
    // Couldn't instantiate class, do something else..
}
myInstance.BestMethodEver(); // Will throw a compile-time error, saying that myInstance is uninitialized, which it potentially is.. :(
So, personally, I think you should keep exceptions for rare error conditions (out of memory, etc.) and use return values (value classes, structs or enums) to do your error checking instead; a sketch of that style follows below.
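A minimal Java sketch of that return-value style (the names are made up): the failure becomes part of the method's signature, and the uninitialized-handle problem from the factory example disappears because the caller only touches the value on the success branch.

// Hypothetical result type carrying either a value or an error code.
final class BuildResult {
    enum Error { NONE, NOT_CONFIGURED, OUT_OF_RESOURCES }

    final Object value; // non-null on success
    final Error error;  // NONE on success

    private BuildResult(Object value, Error error) {
        this.value = value;
        this.error = error;
    }

    static BuildResult ok(Object value) { return new BuildResult(value, Error.NONE); }
    static BuildResult fail(Error error) { return new BuildResult(null, error); }
}

class FactoryExample {
    static BuildResult build(boolean configured) {
        return configured
            ? BuildResult.ok(new Object())
            : BuildResult.fail(BuildResult.Error.NOT_CONFIGURED);
    }

    public static void main(String[] args) {
        BuildResult r = build(false);
        if (r.error != BuildResult.Error.NONE) {
            System.out.println("Couldn't build: " + r.error); // handled inline
        } else {
            System.out.println(r.value); // only used on the success branch
        }
    }
}

The flip side, as other answers note, is that nothing forces the caller to check the error field the way a checked exception would.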
Hope I understood your question correctly :)

A first reaction to a lot of answers:
you're writing for the programmers and the principle of least astonishment
Of course! But an if just isn't clearer all the time.
It shouldn't be astonishing, e.g.: divide (1/x) catch (DivisionByZero) is clearer to me than any if (to Conrad and others). The fact that this kind of programming isn't expected is purely conventional, and indeed, still relevant. Maybe in my example an if would be clearer.
But DivisionByZero and FileNotFound, for that matter, are clearer than ifs.
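For concreteness, here are the two styles being compared, sketched in Java (where integer division by zero throws ArithmeticException); the method names are invented:

class DivisionStyles {
    // Exception as control flow.
    static int invertOrZeroCatch(int x) {
        try {
            return 1 / x;
        } catch (ArithmeticException e) { // integer division by zero
            return 0;
        }
    }

    // Plain guard.
    static int invertOrZeroIf(int x) {
        return (x == 0) ? 0 : 1 / x;
    }
}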
Of course, if it's less performant and needed a zillion times per second, you should avoid it, but I still haven't read any good reason to avoid the overall design.
As far as the principle of least astonishment goes: there's a danger of circular reasoning here. Suppose a whole community uses a bad design; this design will become expected! Therefore the principle cannot be a grail and should be considered carefully.
"if you use exceptions for normal situations, how do you locate unusual (i.e. exceptional) situations?"
In many reactions something like this shines through. Just catch them, no? Your method should be clear, well documented, and honouring its contract. I must admit I don't get that question.
Debugging on all exceptions: the same; that's just done sometimes because the design of not using exceptions is common. My question was: why is it common in the first place?

Before exceptions, in C, there were setjmp and longjmp that could be used to accomplish a similar unwinding of the stack frame.
Then the same construct was given a name: "Exception". And most of the answers rely on the meaning of this name to argue about its usage, claiming that exceptions are intended to be used in exceptional conditions. That was never the intent in the original longjmp. There were just situations where you needed to break control flow across many stack frames.
Exceptions are slightly more general in that you can use them within the same stack frame too. This raises analogies with goto that I believe are wrong. Gotos are a tightly coupled pair (and so are setjmp and longjmp). Exceptions follow a loosely coupled publish/subscribe model that is much cleaner! Therefore using them within the same stack frame is hardly the same thing as using gotos.
The third source of confusion relates to whether they are checked or unchecked exceptions. Of course, unchecked exceptions seem particularly awful to use for control flow and perhaps a lot of other things.
Checked exceptions however are great for control flow, once you get over all the Victorian hangups and live a little.
My favorite usage is a sequence of throw new Success() in a long fragment of code that tries one thing after the other until it finds what it is looking for. Each thing -- each piece of logic -- may have arbitrary nesting, so breaks are out, as are any kind of condition tests. The if-else pattern is brittle. If I edit out an else or mess up the syntax in some other way, then there is a hairy bug.
Using throw new Success() linearizes the code flow. I use locally defined Success classes -- checked of course -- so that if I forget to catch it the code won't compile. And I don't catch another method's Successes.
Sometimes my code checks for one thing after the other and only succeeds if everything is OK. In this case I have a similar linearization using throw new Failure().
Using a separate function messes with the natural level of compartmentalization. So the return solution is not optimal. I prefer to have a page or two of code in one place for cognitive reasons. I don't believe in ultra-finely divided code.
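A minimal Java sketch of the throw new Success() pattern described above, with a locally defined checked exception (all names here are illustrative):

class SuccessExample {

    // Locally defined and checked: forgetting the catch is a compile error.
    private static final class Success extends Exception {
        final String result;
        Success(String result) { this.result = result; }
    }

    static String findProvider() {
        try {
            // Try one thing after another; each attempt may be nested arbitrarily.
            String fromEnv = System.getenv("PROVIDER");
            if (fromEnv != null) throw new Success(fromEnv);

            String fromProp = System.getProperty("provider");
            if (fromProp != null) throw new Success(fromProp);

            throw new Success("default"); // last resort always succeeds
        } catch (Success s) {
            return s.result; // single exit point for all the attempts
        }
    }

    public static void main(String[] args) {
        System.out.println(findProvider());
    }
}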
What JVMs or compilers do is less relevant to me unless there is a hotspot. I cannot believe there is any fundamental reason for compilers not to detect locally thrown and caught exceptions and simply treat them as very efficient gotos at the machine-code level.
As far as using them across functions for control flow -- i.e. for common cases rather than exceptional ones -- I cannot see how they would be less efficient than multiple breaks, condition tests, and returns to wade through three stack frames, as opposed to just restoring the stack pointer.
I personally do not use the pattern across stack frames and I can see how it would require design sophistication to do so elegantly. But used sparingly it should be fine.
Lastly, regarding surprising virgin programmers, it is not a compelling reason. If you gently introduce them to the practice, they will learn to love it. I remember C++ used to surprise and scare the heck out of C programmers.

The standard answer is that exceptions are not for regular flow and should be used only in exceptional cases.
One reason, which is important to me, is that when I read a try-catch control structure in software I maintain or debug, I try to find out why the original coder used exception handling instead of an if-else structure. And I expect to find a good answer.
Remember that you write code not only for the computer but also for other coders. There is a semantics associated with an exception handler that you cannot throw away just because the machine doesn't mind.

Josh Bloch deals with this topic extensively in Effective Java. His suggestions are illuminating and should apply to .NET as well (except for the details).
In particular, exceptions should be used for exceptional circumstances. The reasons for this are usability-related, mainly. For a given method to be maximally usable, its input and output conditions should be maximally constrained.
For example, the second method is easier to use than the first:
/**
 * Adds two positive numbers.
 *
 * @param addend1 greater than zero
 * @param addend2 greater than zero
 * @throws AdditionException if addend1 or addend2 is less than or equal to zero
 */
int addPositiveNumbers(int addend1, int addend2) throws AdditionException {
    if (addend1 <= 0) {
        throw new AdditionException("addend1 is <= 0");
    } else if (addend2 <= 0) {
        throw new AdditionException("addend2 is <= 0");
    }
    return addend1 + addend2;
}

/**
 * Adds two positive numbers.
 *
 * @param addend1 greater than zero
 * @param addend2 greater than zero
 */
public int addPositiveNumbers(int addend1, int addend2) {
    if (addend1 <= 0) {
        throw new IllegalArgumentException("addend1 is <= 0");
    } else if (addend2 <= 0) {
        throw new IllegalArgumentException("addend2 is <= 0");
    }
    return addend1 + addend2;
}
In either case, you need to check to make sure that the caller is using your API appropriately. But in the second case, you require it (implicitly). The soft Exceptions will still be thrown if the user didn't read the javadoc, but:
You don't need to document it.
You don't need to test for it (depending upon how aggressive your unit testing strategy is).
You don't require the caller to handle three use cases.
The ground-level point is that Exceptions should not be used as return codes, largely because you've complicated not only YOUR API, but the caller's API as well.
Doing the right thing comes at a cost, of course. The cost is that everyone needs to understand that they need to read and follow the documentation. Hopefully that is the case anyway.

How about performance? While load testing a .NET web app, we topped out at 100 simulated users per web server until we fixed a commonly-occurring exception, and that number increased to 500 users.

I think that you can use exceptions for flow control. There is, however, a flip side to this technique: creating exceptions is a costly thing, because they have to create a stack trace. So if you want to use exceptions more often than just for signalling an exceptional situation, you have to make sure that building the stack traces doesn't negatively influence your performance.
The best way to cut down the cost of creating exceptions is to override the fillInStackTrace() method like this:
public Throwable fillInStackTrace() { return this; }
Such an exception will have no stack trace filled in.
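Combining that with pre-allocation, here is a sketch of a reusable, stack-trace-free control-flow exception in Java (the class name is mine; the pattern itself is common):

// A control-flow exception that never captures a stack trace.
final class ControlFlow extends RuntimeException {
    // Pre-allocated: throwing it costs neither an allocation nor a stack walk.
    static final ControlFlow INSTANCE = new ControlFlow();

    private ControlFlow() {
        // Java 7+ constructor: disable suppression and stack-trace capture.
        super(null, null, false, false);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // redundant with the constructor flags, but explicit
    }
}

The trade-off of a shared, pre-built instance is the one the answer already mentions: it cannot carry any per-throw data.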

Here are best practices I described in my blog post:
Throw an exception to state an unexpected situation in your software.
Use return values for input validation.
If you know how to deal with exceptions a library throws, catch them at the lowest level possible.
If you have an unexpected exception, discard current operation completely. Don’t pretend you know how to deal with them.

I don't really see how you're controlling program flow in the code you cited. You'll never see another exception besides the ArgumentOutOfRange exception (so your second catch clause will never be hit). All you're doing is using an extremely costly throw to mimic an if statement.
Also, you aren't performing the more sinister of operations, where you throw an exception purely for it to be caught somewhere else to perform flow control. You're actually handling an exceptional case.

Apart from the reasons stated, one reason not to use exceptions for flow control is that it can greatly complicate the debugging process.
For example, when I'm trying to track down a bug in VS I'll typically turn on "break on all exceptions". If you're using exceptions for flow control then I'm going to be breaking in the debugger on a regular basis and will have to keep ignoring these non-exceptional exceptions until I get to the real problem. This is likely to drive someone mad!!

Let's assume you have a method that does some calculations. It has many input parameters to validate, and then it returns a number greater than 0.
Using return values to signal a validation error, it's simple: if the method returned a number less than 0, an error occurred. But how do you tell which parameter didn't validate?
I remember from my C days that a lot of functions returned error codes like this:
-1: x less than MinX
-2: x greater than MaxX
-3: y less than MinY
etc.
Is it really less readable than throwing and catching an exception?
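Sketched in Java with hypothetical bounds, the same validation in both styles:

class ValidationStyles {
    // Style 1: magic negative numbers, as in the C-era functions above.
    static double computeWithCodes(double x, double y) {
        if (x < 0) return -1;   // x less than MinX
        if (x > 100) return -2; // x greater than MaxX
        if (y < 0) return -3;   // y less than MinY
        return x + y + 1;       // the real result, always > 0
    }

    // Style 2: one exception whose message names the offending parameter.
    static double computeWithException(double x, double y) {
        if (x < 0) throw new IllegalArgumentException("x less than MinX");
        if (x > 100) throw new IllegalArgumentException("x greater than MaxX");
        if (y < 0) throw new IllegalArgumentException("y less than MinY");
        return x + y + 1;
    }
}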

Because the code is hard to read, you may have trouble debugging it, you will introduce new bugs when fixing bugs after a long time, it is more expensive in terms of resources and time, and it annoys you if you are debugging your code and the debugger halts on the occurrence of every exception ;)

If you are using exception handlers for control flow, you are being too general and lazy. As someone else mentioned, you know something happened if you end up in the handler, but what exactly? Essentially you are using the exception as an else statement if you are using it for control flow.
If you don't know what possible state could occur, then you can use an exception handler for unexpected states, for example when you have to use a third-party library, or you have to catch everything in the UI to show a nice error message and log the exception.
However, if you do know what might go wrong, and you don't put an if statement or something to check for it, then you are just being lazy. Allowing the exception handler to be the catch-all for stuff you know could happen is lazy, and it will come back to haunt you later, because you will be trying to fix a situation in your exception handler based on a possibly false assumption.
If you put logic in your exception handler to determine what exactly happened, then you would be quite stupid for not putting that logic inside the try block.
Exception handlers are the last resort, for when you run out of ideas/ways to stop something from going wrong, or things are beyond your ability to control. Like, the server is down and times out and you can't prevent that exception from being thrown.
Finally, having all the checks done up front shows what you know or expect will occur and makes it explicit. Code should be clear in intent. What would you rather read?

You can use a hammer's claw to turn a screw, just like you can use exceptions for control flow. That doesn't mean it is the intended usage of the feature. The if statement expresses conditions, whose intended usage is controlling flow.
If you are using a feature in an unintended way while choosing to not use the feature designed for that purpose, there will be an associated cost. In this case, clarity and performance suffer for no real added value. What does using exceptions buy you over the widely-accepted if statement?
Said another way: just because you can doesn't mean you should.

As others have mentioned numerous times, the principle of least astonishment forbids using exceptions excessively for control-flow-only purposes. On the other hand, no rule is 100% correct, and there are always cases where an exception is "just the right tool", much like goto itself, which ships in the form of break and continue in languages like Java and is often the perfect way to jump out of heavily nested loops, which aren't always avoidable.
The following blog post explains a rather complex but also rather interesting use-case for a non-local ControlFlowException:
http://blog.jooq.org/2013/04/28/rare-uses-of-a-controlflowexception
It explains how inside of jOOQ (a SQL abstraction library for Java), such exceptions are occasionally used to abort the SQL rendering process early when some "rare" condition is met.
Examples of such conditions are:
Too many bind values are encountered. Some databases do not support arbitrary numbers of bind values in their SQL statements (SQLite: 999, Ingres 10.1.0: 1024, Sybase ASE 15.5: 2000, SQL Server 2008: 2100). In those cases, jOOQ aborts the SQL rendering phase and re-renders the SQL statement with inlined bind values. Example:
// Pseudo-code attaching a "handler" that will
// abort query rendering once the maximum number
// of bind values was exceeded:
context.attachBindValueCounter();
String sql;
try {
    // In most cases, this will succeed:
    sql = query.render();
}
catch (ReRenderWithInlinedVariables e) {
    sql = query.renderWithInlinedBindValues();
}
If we explicitly extracted the bind values from the query AST to count them every time, we'd waste valuable CPU cycles for those 99.9% of the queries that don't suffer from this problem.
Some logic is available only indirectly via an API that we want to execute only "partially". The UpdatableRecord.store() method generates an INSERT or UPDATE statement, depending on the Record's internal flags. From the "outside", we don't know what kind of logic is contained in store() (e.g. optimistic locking, event listener handling, etc.) so we don't want to repeat that logic when we store several records in a batch statement, where we'd like to have store() only generate the SQL statement, not actually execute it. Example:
// Pseudo-code attaching a "handler" that will
// prevent query execution and throw exceptions
// instead:
context.attachQueryCollector();

// Collect the SQL for every store operation
for (int i = 0; i < records.length; i++) {
    try {
        records[i].store();
    }

    // The attached handler will result in this
    // exception being thrown rather than actually
    // storing records to the database
    catch (QueryCollectorException e) {
        // The exception is thrown after the rendered
        // SQL statement is available
        queries.add(e.query());
    }
}
If we had externalised the store() logic into a "re-usable" API that could be customised to optionally not execute the SQL, we'd be looking at creating a rather hard-to-maintain, hardly re-usable API.
Conclusion
In essence, our usage of these non-local gotos is just along the lines of what Mason Wheeler said in his answer:
"I just encountered a situation that I cannot deal with properly at this point, because I don't have enough context to handle it, but the routine that called me (or something further up the call stack) ought to know how to handle it."
Both usages of ControlFlowExceptions were rather easy to implement compared to their alternatives, allowing us to reuse a wide range of logic without refactoring it out of the relevant internals.
But the feeling of this being a bit of a surprise to future maintainers remains. The code feels rather delicate, and while it was the right choice in this case, we'd always prefer not to use exceptions for local control flow, where it is easy to avoid them with ordinary branching through if-else.

Typically there is nothing wrong, per se, with handling an exception at a low level. An exception IS a valid message that provides a lot of detail about why an operation cannot be performed. And if you can handle it, you ought to.
In general, if you know there is a high probability of failure that you can check for, you should do the check, i.e. if (obj != null) obj.method().
In your case, I'm not familiar enough with the C# library to know if DateTime has an easy way to check whether a timestamp is out of bounds. If it does, just call if(.isvalid(ts)); otherwise your code is basically fine.
So, basically, it comes down to whichever way creates cleaner code: if the operation to guard against an expected exception is more complex than just handling the exception, then you have my permission to handle the exception instead of creating complex guards everywhere. A classic example follows below.
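One classic Java case where the guard really is more complex than the catch: validating an integer string up front largely re-implements the parse, so catching NumberFormatException is arguably the cleaner option (method names invented for this sketch):

class ParseStyles {
    // Guard style: the regex duplicates most of the parsing work and still
    // misses edge cases near Integer.MIN_VALUE / MAX_VALUE (10-digit inputs).
    static Integer parseOrNullGuarded(String s) {
        if (s != null && s.matches("-?\\d{1,9}")) {
            return Integer.parseInt(s);
        }
        return null;
    }

    // Catch style: let the library decide what is valid.
    static Integer parseOrNullCaught(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) { // also covers null input
            return null;
        }
    }
}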

You might be interested in having a look at Common Lisp's condition system which is a sort of generalization of exceptions done right. Because you can unwind the stack or not in a controlled way, you get "restarts" as well, which are extremely handy.
This doesn't have anything much to do with best practices in other languages, but it shows you what can be done with some design thought in (roughly) the direction you are thinking of.
Of course there are still performance considerations if you're bouncing up and down the stack like a yo-yo, but it's a much more general idea than the "oh crap, let's bail" kind of approach that most catch/throw exception systems embody.

I don't think there is anything wrong with using exceptions for flow control. Exceptions are somewhat similar to continuations, and in statically typed languages exceptions are more powerful than continuations; so, if you need continuations but your language doesn't have them, you can use exceptions to implement them.
Well, actually, if you need continuations and your language doesn't have them, you chose the wrong language and you should rather be using a different one. But sometimes you don't have a choice: client-side web programming is the prime example – there's just no way to get around JavaScript.
An example: Microsoft Volta is a project to allow writing web applications in straightforward .NET, letting the framework take care of figuring out which bits need to run where. One consequence of this is that Volta needs to be able to compile CIL to JavaScript, so that you can run code on the client. However, there is a problem: .NET has multithreading, JavaScript doesn't. So, Volta implements continuations in JavaScript using JavaScript exceptions, then implements .NET threads using those continuations. That way, Volta applications that use threads can be compiled to run in an unmodified browser – no Silverlight needed.
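A minimal Java sketch of the simplest continuation-like use, an "escape continuation" that aborts a deep recursive traversal the moment an answer is found (all names invented):

import java.util.List;

class EscapeExample {

    // Unchecked carrier for the non-local exit; no stack trace needed.
    private static final class Found extends RuntimeException {
        final int value;
        Found(int value) {
            super(null, null, false, false); // skip stack-trace capture
            this.value = value;
        }
    }

    // Deeply nested search: throwing Found unwinds all frames at once,
    // which is exactly the escape-continuation use of exceptions.
    private static void walk(Object node) {
        if (node instanceof Integer n) {
            if (n % 7 == 0) throw new Found(n);
        } else if (node instanceof List<?> children) {
            for (Object child : children) walk(child);
        }
    }

    static int findMultipleOf7(Object tree) {
        try {
            walk(tree);
            return -1; // not found
        } catch (Found f) {
            return f.value;
        }
    }

    public static void main(String[] args) {
        Object tree = List.of(1, List.of(2, List.of(3, 14)), 5);
        System.out.println(findMultipleOf7(tree)); // prints 14
    }
}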

But you won't always know what happens in the method(s) that you call. You won't know exactly where the exception was thrown without examining the exception object in greater detail...

I feel that there is nothing wrong with your example. On the contrary, it would be a sin to ignore the exception thrown by the called function.
In the JVM, throwing an exception is not that expensive; only creating the exception with new xyzException(...) is, because the latter involves a stack walk. So if you have some exceptions created in advance, you may throw them many times without cost. Of course, this way you can't pass data along with the exception, but I think that is a bad thing to do anyway.

There are a few general mechanisms via which a language could allow for a method to exit without returning a value and unwind to the next "catch" block:
Have the method examine the stack frame to determine the call site, and use the metadata for the call site to find either information about a try block within the calling method, or the location where the calling method stored the address of its caller; in the latter situation, examine metadata for the caller's caller to determine in the same fashion as the immediate caller, repeating until one finds a try block or the stack is empty. This approach adds very little overhead to the no-exception case (it does preclude some optimizations) but is expensive when an exception occurs.
Have the method return a "hidden" flag which distinguishes a normal return from an exception, and have the caller check that flag and branch to an "exception" routine if it's set. This approach adds 1-2 instructions to the no-exception case, but relatively little overhead when an exception occurs.
Have the caller place exception-handling information or code at a fixed address relative to the stacked return address. For example, with the ARM, instead of using the instruction "BL subroutine", one could use the sequence:
adr lr,next_instr
b subroutine
b handle_exception
next_instr:
To exit normally, the subroutine would simply do bx lr or pop {pc}; in case of an abnormal exit, the subroutine would either subtract 4 from LR before performing the return or use sub pc,lr,#4 (depending upon the ARM variation, execution mode, etc.). This approach will malfunction very badly if the caller is not designed to accommodate it.
A language or framework which uses checked exceptions might benefit from having those handled with a mechanism like #2 or #3 above, while unchecked exceptions are handled using #1. Although the implementation of checked exceptions in Java is rather a nuisance, they would not be a bad concept if there were a means by which a call site could say, essentially, "This method is declared as throwing XX, but I don't expect it ever to do so; if it does, rethrow it as an unchecked exception." In a framework where checked exceptions were handled in such a fashion, they could be an effective means of flow control for things like parsing methods which, in some contexts, may have a high likelihood of failure, but where failure should return fundamentally different information than success. I'm unaware of any frameworks that use such a pattern, however. Instead, the more common pattern is to use the first approach above (minimal cost for the no-exception case, but high cost when exceptions are thrown) for all exceptions.

One aesthetic reason:
A try always comes with a catch, whereas an if doesn't have to come with an else.
if (PerformCheckSucceeded())
    DoSomething();
With try/catch, it becomes much more verbose.
try
{
    PerformCheckSucceeded();
    DoSomething();
}
catch
{
}
That's 6 lines of code too many.

Related

If you can find property names on strings, why can't you find them on undefined?

You can change the global variable undefined to a value, and you can read a property from a string, but you can't read a property of undefined.
The first two (the ones you can do) seem completely useless, and the third is something I think most developers would love to have work. If you make an API call and you're trying to handle complex data, why not just evaluate undefined.undefined.undefined.undefined to undefined instead of literally crashing the program? I know about optional chaining, but I'm curious why this is even an issue. It seems like an easy fix, but I'm guessing there is something I don't know about.
The point of errors in programming is that they inform you of a serious contract violation or illegal state. You expected something to be defined, but it wasn't. You expected something to be a certain type but it wasn't. Usually, the program shouldn't try to continue under a contract violation, and if it should, you'd want to make that explicit and at least log the violation rather than suppress it completely with head buried in the sand.
In many languages, these sort of invariant violations and illegal operations cause explicit errors, either at runtime or in compilation. The earlier you can catch the problem (thanks to a compiler error, warning or runtime crash/error during testing), the sooner and easier you can fix the bug.
JS has a reputation as stubborn to raise errors due to weak typing and other permissive design choices. You can concatenate and perform math on pretty much any types, or compare values with coercive equality operator == and wind up with anyone's-guess results. The reason is mostly historical: JS was designed for quick, simple scripts, not for massive single-page applications of today.
The reason technology like TypeScript exists is to essentially add errors to JavaScript to make programs safer and support the sort of large, industrial-grade applications that were never envisioned in 1995 when JS was designed. These errors are purely compile-time in the case of TS, but other languages like Python won't let you do many of the things JS does at runtime, like adding two different typed values "abc" + 42 which raises a TypeError (dictionary [a sort of Map] accesses also raise an error; you have to opt-in to the default value equivalent of undefined).
Similarly, JS's strict mode "eliminates some JavaScript silent errors by changing them to throw errors".
The contrary philosophy is to suppress errors, which is exactly what the optional chaining operator ?. offers, and it sounds like you're suggesting the default object property operator . should as well. This is syntactical sugar for code that returns undefined when an object doesn't exist, and it's nice to use occasionally to keep code terse and communicate intent, saying essentially "look, I realize this particular access might be on an undefined object, and my code is designed such that it expects that case and will continue on with the expression evaluating to undefined".
But it's a good thing this isn't the default behavior: it's essentially suppressing an error, pretty much the same rationale behind "abc" * 42 returning the default value NaN instead of throwing a type error in JS. Why would you ever want this? Probably never, hence JS's reputation as unsafe.
As I mentioned in the comments, React also made changes to ascribe to the "fail fast, fail loudly, fail hard" philosophy:
[...] in our experience it is worse to leave corrupted UI in place than to completely remove it. For example, in a product like Messenger leaving the broken UI visible could lead to somebody sending a message to the wrong person. Similarly, it is worse for a payments app to display a wrong amount than to render nothing.
[...] This change means that as you migrate to React 16, you will likely uncover existing crashes in your application that have been unnoticed before.
[...] We also encourage you to use JS error reporting services (or build your own) so that you can learn about unhandled exceptions as they happen in production, and fix them.
Here's advice from an article on javascript.info:
Don't overuse the optional chaining
We should use ?. only where it's ok that something doesn't exist.
For example, if according to our coding logic user object must exist, but address is optional, then we should write user.address?.street, but not user?.address?.street.
So, if user happens to be undefined due to a mistake, we’ll see a programming error about it and fix it. Otherwise, coding errors can be silenced where not appropriate, and become more difficult to debug.
And here's a nice blog post on the matter

In Javascript, is it expensive to use try-catch blocks even if an exception is never thrown?

Is it "slow" to use several try-catch blocks when no exceptions are thrown in any of them? My question is the same as this one, but for JavaScript.
Suppose I have 20 functions which have try-catch blocks in them, and another function that calls every one of those 20 functions, where none of them throws an exception. Will my code execute slower or perform much worse because of these try-catch blocks?
Are you doing typical CRUD UI code? Use try/catches, use loops that go to 10000 for no reason sprinkled in your code, hell, use Angular/Ember - you will not notice any performance issue.
If you are doing a low-level library, physics simulations, games, server-side code, etc., then a never-throwing try-catch block wouldn't normally matter at all. The problem is that V8 didn't support try-catch in its optimizing compiler until version 6 of the engine, so any function that syntactically contains a try-catch would not be optimized. You can easily work around this, though, by creating a helper function like tryCatch:
function tryCatch(fun) {
    try {
        return fun();
    }
    catch (e) {
        tryCatch.errorObj.e = e;
        return tryCatch.errorObj;
    }
}
tryCatch.errorObj = {e: null};

var result = tryCatch(someFunctionThatCouldThrow);

if (result === tryCatch.errorObj) {
    // The function threw
    var e = result.e;
}
else {
    // result is the returned value
}
After V8 version 6 (shipped with Node 8.3 and latest Chrome), the performance of code inside try-catch is the same as that of normal code.
The original question asked about the cost of try/catch when an error was not thrown. There is definitely an impact when protecting a block of code with try/catch, but the impact of try/catch will vanish quickly as the code being protected becomes even slightly complex.
Consider this test: http://jsperf.com/try-catch-performance-jls/2
A simple increment runs at 356,800,000 iterations per second
The same increment within a try/catch is 93,500,000 iterations per second. That's an overhead of 75% due to try/catch.
BUT, a trivial function call runs at 112,200,000 iterations per second.
2 trivial function calls run at 61,300,000 iterations per second.
An un-exercised try in this test takes slightly more time than one trivial function call. That's hardly a speed penalty that matters, except in the innermost loop of something really intense like an FFT.
The case you want to avoid is the case where an exception is actually thrown. That is immensely slower, as shown in the above link.
Edit: Those numbers are for Chrome on my machine. In Firefox there is no significant difference between an unexercised try and no protection at all. There's essentially zero penalty to using try/catch if no exception is thrown.
The try-catch block is said to be expensive. However, if critical performance is not an issue, using it is not necessarily a concern.
The penalty, IMO, is:
readability
inappropriate in many cases
ineffective when it comes to async programming
Readability: plumbing your code with plenty of try-catch blocks is ugly and distracting.
Inappropriate: it's a bad idea to insert such a block if your code is not subject to an exception crash. Insert it only if you expect a failure in your code. Take a look at the following topic: When to use try/catch blocks?
Async: the try-catch block is synchronous and is not effective when it comes to async programming. During an Ajax request you handle both the error and success events in dedicated callbacks. No need for try-catch.
Hope this helps,
R.
I'll attempt to provide an answer based on concrete benchmark results. For that, I wrote a simple benchmark that compares try-catch to various if-else conditions, from simple to more complex. I understand that benchmarks might change a lot depending on the platform. Please comment if you get different results. See the try-catch benchmark here.
First, I try to represent the test suite here in a compact manner. See the link above for full details. There are four test cases, referred to later by (index):
(1) try-catch block that calls a function lib.foo with a bit of trigonometric math. No error is ever thrown.
(2) if-else block that checks the existence of the function by 'foo' in lib and then calls the function.
(3) if-else block that checks the existence of the function by typeof lib['foo'] === 'function' and then calls the function.
(4) if-else block that checks the existence of the function by Object.prototype.hasOwnProperty.call(lib, 'foo') and then calls the function.
I ran the benchmark a few times on Chrome 87. Although the actual numbers changed from time to time, the results were consistent and can be roughly summarised as follows:
The try-catch (1) and if-else (2) were almost equivalent in run time. The try-catch (1) was sometimes slower by 1% to 2%.
The if-else (3) was 75% slower than try-catch (1) or if-else (2).
The if-else (4) was 90% slower than try-catch (1) or if-else (2).
To clarify, 75% slower means that if the fastest case took 1.0 secs then the 75% slower execution took 1.75 seconds.
As a conclusion, using try-catch in the case where error is never thrown seems to be as efficient as checking any simple condition. If the condition has anything more complex, try-catch is significantly faster.
As a personal note, the conclusion is in line with what I was taught at university. Although it was in the context of C++, the same lesson seems to apply here. If I remember correctly, my lecturer said that the try-block was designed to be very efficient, almost invisible efficiency-wise. However, it was the catch-block that was slow, and I mean really slow. If an error was thrown, then the handling with a catch-block took hundreds or even thousands of times longer than what could be achieved with an if-else block. Therefore, keep your exceptions exceptional.

Why don't JavaScript libraries use try-catch blocks more often?

I'm still relatively new to JavaScript, coming from a more classical (i.e. Java, also ActionScript 3.0) background. I'm finding that it's common for an incorrect implementation of a library/framework's API to break things further up the call stack, without any clear indication that it's application code (not library code) breaking things.
For example, a jQuery.trigger() call may invoke a handler that throws an error, and that invocation is not wrapped in a try-catch (nor implements any other kind of error protection), and prevents all other handlers from firing.
I understand an error should halt execution, but it seems like library code could be better sandboxed from application code, and I see this kind of breakage much more frequently in JS libs than in other languages I've worked with.
Firstly, because it's a pain to catch a single exception:
try {
    doSomething();
} catch (e) {
    if (e instanceof SomeException) {
        // handle SomeException
    } else {
        throw e; // and lose stacktrace information :-(
    }
}
and catching all exceptions is usually a wrong thing.
Secondly, because the exception hierarchy of JavaScript itself is very poor at differentiating different kinds of error and providing informational properties about them. JavaScript prefers to do something that you might have meant, and quietly break when you didn't, rather than raise an error (see undefined et al). It is also historically inconsistent between browsers (especially DOM exceptions). This means there is no culture of using exceptions, and that inherits into library design.
What bobince said, but also: even mentioning try-catch in your function currently means nothing in that function will be optimized. See here; note that the jsPerf is not meant to show magnitude, which can be up to a 1000x (or whatever) difference between optimized and unoptimized code. You can put more code in the tests and you should see the relative difference grow larger and larger.
See V8 source and SpiderMonkey bug
This is radically different from e.g. Java, where merely mentioning try-catch doesn't really affect anything at all in terms of performance; it is when an exception actually happens that you pay (and then it doesn't matter).

Is throwing an exception in javascript/nodejs really that bad?

I'm reading about error handling in nodejs, and I came across something unsettling while reading this document:
http://nodejs.org/api/domain.html
It says "By the very nature of how throw works in JavaScript, there is almost never any way to safely "pick up where you left off", without leaking references, or creating some other sort of undefined brittle state."
This sounds horribly horrible. Is this really saying that any time an exception is thrown, I need to shut down the thread? I feel like I'm missing something here.
There's nothing wrong with throwing exceptions in the right circumstances. It is a useful tool and can be used as such. Exceptions are generally not the right tool for normal, expected, frequently used code paths because they are slow, much slower than normal return values. It's usually better to use return values for those types of circumstances if performance matters to you.
But exceptions can significantly simplify your code for unexpected error conditions or non-normal conditions, and in a memory-managed language like JavaScript you generally don't have to worry about memory leaks when throwing an exception, except if you are in the middle of manipulating persistent global state when you throw. All local variables and their references are cleaned up for you as they go out of scope when an exception is thrown.
Exceptions don't lead to memory leaks or a brittle state unless your code is poorly written, which is the same for any other method of indicating an error condition.

Using 'return' instead of 'else' in JavaScript

I am working on a project which requires some pretty intricate JavaScript processing. This includes a lot of nested if-elses in quite a few places. I have generally taken care to optimise JavaScript code as much as possible by reading the other tips on Stack Overflow, but I am wondering if the following two constructs would make any difference in terms of speed alone:
if (some_condition) {
    // process
    return;
}
// Continue the else condition here
vs
if (some_condition) {
    // Process
}
else {
    // The 'else' condition...
}
I always go with the first method. Easier to read, and less indentation. As far as execution speed, this will depend on the implementation, but I would expect them both to be identical.
In many languages, it is a common practice to invert if statements to reduce nesting or to use preconditions.
And having less nesting in your code improves code readability and maintainability.
"Profile, don't speculate!"
you're putting the cart before the horse (maintainability by humans trumps machine speed)
you should drive your optimization efforts by measurements, which means
you should time the execution yourself; it'll obviously differ in different browsers and versions
you should only optimize for speed the hotspots of your application (see point 1)
I use the first approach when ruling out invalid situations.
E.g. use the first approach when doing some validations, and return if any of the validations fail. There's no point in going further if any of the preconditions fail. The same thing is mentioned by Martin Fowler in his Refactoring book. He calls it "Replace Nested Conditional with Guard Clauses", and it can really make the code easy to understand.
Here's a Java example.
public void debitAccount(Account account, BigDecimal amount) {
    if (account.user == getCurrentUser()) {
        if (account.balance.compareTo(amount) >= 0) {
            account.balance = account.balance.subtract(amount);
        } else {
            // return or throw exception
        }
    } else {
        // return or throw exception
    }
}

VS

public void debitAccount(Account account, BigDecimal amount) {
    if (account.user != getCurrentUser()) return; // or throw
    if (account.balance.compareTo(amount) < 0) return; // or throw
    account.balance = account.balance.subtract(amount);
}
There won't be any difference in performance. I would recommend the second example for maintainability. In general, it's good practice to have one and only one possible exit point for a routine. It aids debugging and comprehension.
When there is only a single if..else, the performance is almost the same and it doesn't matter. Use whatever is most readable in your case. But with nested statements, using return is the most performant, compared to if...else and switch cases.
Maybe slightly, but I don't think it will be measurable unless the rest of the function involves "heavy" (and otherwise redundant, since I assume a return would give the same result) JS calls.
As a side note, I think this is unnecessary micro-optimization, and you should probably look elsewhere for performance improvements, i.e. profile the script through Chrome's developer tools or Firebug for Firefox (or similar tools) and look for slow/long-running calls/functions.
While it depends on the JavaScript implementation of the running browser, there should not be any notable difference between them (in terms of speed).
The second form is preferable, since breaking the flow is not considered a good programming habit. Also consider that in assembly, the jump instruction (micro-operation) is evaluated regardless of the condition's outcome.
Speaking from my experience, it depends on the condition you are checking.
if .. return is fine and easy to read if you are checking some boolean condition (maybe a setting) that would make all of the following code unnecessary to execute.
if .. else is much easier to read if you expect some value to be one of two (or more) possible values and you want to execute different code for each case. Meaning the two possible values represent conditions of equal weight and should therefore be written at the same logical level.
Test it yourself. If this JavaScript is being run in the browser, it will almost certainly depend on the browser's JavaScript parsing engine.
My understanding is that it would not make a difference, because you branch with the if condition. So, if some_condition is true, the else portion will not be touched, even without the return.
Suppose the return takes 1ms versus the nested if taking 0.1ms (or vice-versa).
It's hard to imagine either one being nearly that slow.
Now, are you doing it more than 100 times per second?
If so, maybe you should care.
In my opinion, return and else are the same for the above case, but in general if-else and if() return; are very different. You can use a return statement when you want to move from the present scope to the parent scope, while in the case of if-else you can check for further if-elses within the same scope.
