I was learning TS after JS, and it's said that TS provides many features that JS lacks, like types. But ultimately TS is compiled down to JS. So how is this possible? How can features that aren't available in a language be achieved using another one, if the latter is ultimately converted into the first?
So, as stated in the comments, TypeScript doesn't do anything at runtime. After transpilation, all of the type information is gone for good. The point of TypeScript is to check your code beforehand and find potential errors that may cause problems when executing the resulting JavaScript code. All of its features that are not part of JavaScript are erased after the checks are performed. private and public are also erased; the only reason they exist is so that you cannot use those members in your own code. But it's just a "recommendation" that you can totally ignore (because, again, private doesn't exist at runtime). For example, you can do something like this:
class Foo {
    private bar() {}
}
;(new Foo() as any).bar()
Or even
class Foo {
    private bar() {}
}
// @ts-ignore
new Foo().bar()
And there you are: you've successfully tricked the TypeScript compiler, and this code will work after transpilation even though the bar method is private and is not supposed to be accessible.
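For illustration, the JavaScript that tsc emits for the class above (roughly, when targeting ES2015; the exact output varies by compiler version and target) is just:

class Foo {
    bar() {} // no trace of `private` survives compilation
}
new Foo().bar() // runs fine: the access check only ever existed at compile time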
As for "do all high-level languages have a type system like TypeScript's?" - well, I'd say no.
Take C, for example. There, types also don't exist at runtime per se. When the program is compiled, assembled, and executed, everything is just a sequence of ones and zeroes; the computer doesn't care whether a memory location holds a string or a boolean or whatever, it's all just bits. However, the types are essential for your program to work, because the type defines how many bytes of memory are treated as a single variable. If you define an integer variable, you let the compiler know that this variable will occupy 4 bytes in memory and that, for example, adding a value to it should update all 4 bytes. If you define a char variable, the compiler knows to update only 1 byte.
But these types don't really exist at runtime either; you can use some pointer shenanigans to trick the compiler into treating a char variable you define as an integer. Will it cause problems? A lot. Is it cool? Heck yes. Now every time you add a number to this variable, your program will update 4 bytes instead of one, probably clobbering whatever important information lives in the 3 bytes that don't belong to your variable. Good luck debugging.
With other languages it's a little different; Java, for example, abstracts away this pointer trickery, so you can't trick the compiler as easily. But variables of different types still produce different bytecode after compilation.
So in some languages (like C and Java) types actually affect how your program runs (for example, how many bytes a variable occupies), but in TypeScript they don't. You can always use any to disable type checking for any piece of your code; if the algorithm is correct, it will still work. That's not the case in C, though.
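To make that last point concrete, here is a small sketch of my own: average is fully typed, but typing values as any switches checking off for that value, and the emitted JS still runs because the underlying algorithm is correct.

function average(nums: number[]): number {
    return nums.reduce((a, b) => a + b, 0) / nums.length;
}

// `any` disables all compile-time checks for this value; nothing changes
// at runtime, because the value really is an array of numbers.
const values: any = [2, 4, 6];
console.log(average(values)); // 4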
I'm sorry for this silly question, but I'm confused about this.
I have been reading the You Don't Know JS Yet book, and it says that
JS is most accurately portrayed as a compiled language.
The book explains this with some examples, and that makes sense to me.
But when I search the internet, most people say that JS is an interpreted language.
I have read that JS engines use many kinds of tricks to handle JS programs, like JIT compilation and hot recompilation.
So should I consider JavaScript to be both a compiled and an interpreted language?
UPDATE:
When JavaScript first appeared in 1995-96, Brendan Eich created the very first JS engine, called SpiderMonkey (still used in Mozilla Firefox). At that time JavaScript was created with browsers in mind, so that any file coming from a server could be quickly interpreted and displayed by the browser.
An interpreter was the best choice for this, since interpreters execute code line by line and show the results immediately.
But as time went on, performance became an issue; things were getting slower and slower. The problem with interpreters shows up when you run the same code over and over again in a loop like this one:
const someCalculation = (num1, num2) => {
    return num1 + num2;
};

for (let i = 0; i < 10000; i++) {
    someCalculation(5, 6); // 11
}
It can get really, really slow.
So the best option was to introduce a compiler, which actually helps here. It takes a bit more time to start up, because it has to go through a compilation step at the beginning: read our code, understand it, and translate it into another language. But the compiler can be smart. When it sees code like the above (where we loop with the same inputs, returning the same outputs), it can simplify it: instead of calling the function again and again, it can replace the call with its result. Something like this:
const someCalculation = (num1, num2) => {
    return num1 + num2;
};

for (let i = 0; i < 10000; i++) {
    11; // the compiler no longer calls someCalculation on every pass
}
Because the compiler does not repeat the translation on each pass through the loop, the code it generates is actually faster. These sorts of edits that compilers make are called optimizations.
Thus JavaScript engines combined the interpreter and the compiler to get the best of both worlds: browsers started adding JIT compilers (for just-in-time compilation) to make their engines faster.
A profiler keeps watch for repeated code and passes it on to the compiler for code optimizations.
This means that the execution speed of the JavaScript code we feed into the engine gradually improves, because the profiler and the compiler are constantly making updates and changes to our bytecode in order to be as efficient as possible. So the interpreter allows us to run the code right away, while the profiler and compiler allow us to optimize that code as we run it.
Now let's come to some conclusions:
Now that we know how the JS engine works under the hood, we as programmers can write more optimizable code - code that the compiler can take and run faster than our regular JavaScript. However, we need to make sure we don't confuse the compiler, because it is not perfect: it can make mistakes, and an attempted optimization can end up doing exactly the opposite. When that happens, the engine performs what's called a de-optimization, which takes even longer, as it throws away the optimized code and reverts to the interpreter.
NOW THE BIG QUESTION: Is JavaScript an interpreted language?
ANSWER: Yes. Initially, when JavaScript first came out, you had a JavaScript engine such as SpiderMonkey (created by Brendan Eich) that interpreted JavaScript to bytecode, and this engine ran inside our browser to tell our computers what to do.
But things have evolved: we don't just have interpreters any more, we also use compilers to optimize the code. So this is a common misconception.
When someone says JavaScript is an interpreted language, there is some truth to it, but it depends on the implementation. You could build an implementation of a JavaScript engine that only compiles. Technically, it all depends on the implementation.
JavaScript is initially interpreted. When the engine encounters a bit of code for the first time, it reads the tokens one by one and executes them exactly according to the specification. This is level 0.
If a bit of code is executed often, let's say 100 times (the exact number depends on the browser), it is considered "warm". The browser precomputes the tokenization and basic operations into a slightly faster bytecode. At this stage no assumptions are made, and the bytecode is completely equivalent to the original code. This is level 1.
If code is executed more often still, let's say 10,000 times, and the types of the parameters are always the same, a further compilation step can be applied. Many JavaScript operators do wildly different things depending on the operand types, so every operator carries logic to check which variant to perform (e.g. addition or concatenation), and different amounts of memory need to be allocated for different types of objects. Performing the type checks once at the top of the function and allocating all the memory at once is much faster. This is level 2.
Depending on the browser there may be more optimization levels, usually based on stricter assumptions about the parameters; there might be a more efficient addition path for integers, for example. Of course, if you ever call the function with a different parameter type, the browser has to fall back to executing unoptimized raw JS again.
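As a rough sketch of that last point (engine heuristics vary between browsers; the thresholds and behavior here are illustrative, not exact):

function add(a, b) {
    return a + b;
}

// Called with the same types every time, `add` stays "monomorphic",
// so the engine can compile it down to a fast numeric addition.
for (let i = 0; i < 100000; i++) {
    add(i, 1);
}

// One call with different parameter types breaks that assumption; the
// engine deoptimizes and falls back to the slower generic path.
add('i', '1'); // string concatenation this time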
Practical Advice
This all happens under the hood, and as a programmer you will most likely never have to worry about it. The optimization will never change the big-O complexity of your program, which is usually the cause of most slow software. You might be able to gain a little speed by keeping the parameter types of your most-called functions consistent, though rarely enough to be worth the trouble.
Check MDN - that's where you'll get the most accurate information. To quote MDN on the topic:
JavaScript (JS) is a lightweight, interpreted, or just-in-time compiled programming language
Basically, since JS is used in multiple environments, it can be either one or the other.
I'm working on react-metaform, and one of my challenges is that I need to allow the end-user to define metadata as functions. Example:
socialSecurityNumber.required: (m) => m.type == 'person'
The problem is obvious: I cannot trust the user. So these are the precautions I'm planning to take:
User-defined functions should be pure functions, in the sense that they can only access their parameters, nothing else.
User-defined functions will run in an environment that is resilient to exceptions, too long execution times and infinite loops. (I'm not worried about this right now).
The question is: how do I make sure a user-defined function only accesses its parameters and nothing else?
I would use esprima to parse the users' JavaScript functions stored in files or in a database, and allow only code that passes the parsing test to run (whitelisted features only: local variables, parameters, and so on).
You can start with a very simple checker that allows only very limited scripts and progressively improve it. I suspect you will end up putting a lot of effort into this solution over time, though, because your users will always want more.
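A minimal sketch of that idea (the function name and the exact whitelist are mine, and it assumes esprima 4's parseScript; a real checker would need to be much stricter, e.g. about which identifiers and member accesses are allowed):

const esprima = require('esprima');

// AST node types we are willing to accept in a user-defined metadata function.
const WHITELIST = new Set([
    'Program', 'ExpressionStatement', 'ArrowFunctionExpression',
    'BinaryExpression', 'LogicalExpression', 'UnaryExpression',
    'MemberExpression', 'Identifier', 'Literal',
]);

function isSafe(source) {
    let ast;
    try {
        ast = esprima.parseScript(source);
    } catch (e) {
        return false; // unparsable input is rejected outright
    }
    let ok = true;
    (function walk(node) {
        if (!node || typeof node.type !== 'string') return;
        if (!WHITELIST.has(node.type)) { ok = false; return; }
        for (const key of Object.keys(node)) {
            const child = node[key];
            if (Array.isArray(child)) child.forEach(walk);
            else if (child && typeof child === 'object') walk(child);
        }
    })(ast);
    return ok;
}

console.log(isSafe("(m) => m.type == 'person'"));     // true
console.log(isSafe("(m) => window.alert('gotcha')")); // false (CallExpression)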
Note: Angular.js uses this kind of 'trick' for its dependency injection: https://jsfiddle.net/987Lwezy/
function test() {
    console.log("This is my secret!");
}

// Renders the source of whatever function it is handed.
function parser(f) {
    document.body.innerHTML = f.toString();
}

parser(test);
You could use https://github.com/lcrespom/purecheck. It hasn't been updated in a while, but seems to support what you need.
I'm new-ish to JavaScript. I understand many of the concepts of the language, I've been reading up on the prototype inheritance model, and I'm whetting my whistle with more and more interactive front-end stuff. It's an interesting language, but I'm always a bit turned off by the callback spaghetti that is typical of many non-trivial interaction models.
Something that has always seemed strange to me is that, despite the readability nightmare that is a nest of JavaScript callbacks, the one thing I very rarely see in examples and tutorials is the use of predefined named functions as callback arguments. I'm a Java programmer by day and, setting aside the stereotypical jabs about Enterprise-y names for units of code, one of the things I've come to enjoy about working in a language with a strong selection of featureful IDEs is that meaningful, if long, names can make the intent and meaning of code much clearer without making it harder to actually be productive. So why not use the same approach when writing JavaScript code?
Giving it some thought, I can come up with arguments both for and against this idea, but my naivety and newness to the language keep me from reaching any conclusions as to why this would be good at a technical level.
Pros:
Flexibility. An asynchronous function with a callback parameter could be reached by many different code paths, and it could be a hassle to have to write a named function to account for every possible edge case.
Speed. It plays heavily in to the hacker mentality. Bolt things on to it until it works.
Everyone else is doing it
Smaller file sizes, even if trivially so, but every bit counts on the web.
Simpler AST? I would assume that anonymous functions are generated at runtime and so the JIT won't muck about with mapping the name to instructions, but I'm just guessing at this point.
Quicker dispatching? Not sure about this one either. Guessing again.
Cons:
It's hideous and unreadable
It adds to the confusion when you're nested nuts deep in a swamp of callbacks (which, to be fair, probably means you're writing poorly constructed code to begin with, but it's quite common).
For someone without a functional background it can be a bizarre concept to grok
With so many modern browsers able to execute JavaScript much faster than before, I'm failing to see how any trivial performance gain one might get out of using anonymous callbacks would be a necessity. It seems that, if you are in a situation where using a named function is feasible (predictable behavior and path of execution), then there would be no reason not to.
So are there any technical reasons or gotchas that I'm not aware of that makes this practice so commonplace for a reason?
I use anonymous functions for three reasons:
If no name is needed because the function is only ever called in one place, why add a name to whatever namespace you're in?
Anonymous functions are declared inline and inline functions have advantages in that they can access variables in the parent scopes. Yes, you can put a name on an anonymous function, but that's usually pointless if it's declared inline. So inline has a significant advantage and if you're doing inline, there's little reason to put a name on it.
The code seems more self-contained and readable when handlers are defined right inside the code that's calling them. You can read the code in almost sequential fashion rather than having to go find the function with that name.
I do try to avoid deep nesting of anonymous functions because that can be hairy to understand and read. Usually when that happens, there's a better way to structure the code (sometimes with a loop, sometimes with a data table, etc...) and named functions isn't usually the solution there either.
I guess I'd add that if a callback starts to get more than about 15-20 lines long and it doesn't need direct access to variables in the parent scope, I'd be tempted to give it a name and break it out into its own named function declared elsewhere. There is definitely a readability point here: a non-trivial function that gets long is just more maintainable in its own named unit. But most callbacks I end up with are not that long, and I find it more readable to keep them inline.
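For example (my own sketch; button, form, label, and the handler bodies are made up), compare the two styles:

// Inline: fine while the handler is short and uses the parent scope.
let counter = 0;
button.addEventListener('click', function () {
    counter += 1;
    label.textContent = String(counter);
});

// Extracted: better once the handler grows past a screenful, as long
// as it doesn't need variables from the enclosing scope.
function handleSubmit(event) {
    event.preventDefault();
    // ...twenty lines of validation and submission logic...
}
form.addEventListener('submit', handleSubmit);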
I prefer named functions myself, but for me it comes down to one question:
Will I use this function anywhere else?
If the answer is yes, I name/define it. If not, pass it as an anonymous function.
If you only use it once, it doesn't make sense to crowd the global namespace with it. In today's complex front-ends, the number of named functions that could have been anonymous grows quickly (easily over 1000 on really intricate designs), resulting in (relatively) large performance gains by preferring anonymous functions.
However, code maintainability is also extremely important. Each situation is different. If you're not writing a lot of these functions to begin with, there's no harm in doing it either way. It's really up to your preference.
Another note about names. Getting in the habit of defining long names will really hurt your file size. Take the following example.
Assume both of these functions do the same thing:
function addTimes(time1, time2)
{
    return time1 + time2;
}

function addTwoTimesIn24HourFormat(time1, time2)
{
    return time1 + time2;
}
The second tells you exactly what it does in the name; the first is more ambiguous. However, there are 17 characters of difference in the name. Say the function is called 8 times throughout the code; counting the definition, that's 9 × 17 = 153 extra bytes your code didn't need to have. Not colossal, but if it's a habit, extrapolating that to tens or even hundreds of functions will easily mean a few KB of difference in the download.
Again however, maintainability needs to be weighed against the benefits of performance. This is the pain of dealing with a scripted language.
A bit late to the party, but some not yet mentioned aspects to functions, anonymous or otherwise...
Anon funcs are not easily referred to in humanoid conversations about code, amongst a team. E.g., "Joe, could you explain what the algorithm does, within that function. ... Which one? The 17th anonymous function within the fooApp function. ... No, not that one! The 17th one!"
Anon funcs are anonymous to the debugger as well. (duh!) Therefore, the debugger stack trace will generally just show a question mark or similar, making it less useful when you have set multiple breakpoints. You hit the breakpoint, but find yourself scrolling the debug window up/down to figure out where the hell you are in your program, because hey, question mark function just doesn't do it!
Concerns about polluting the global namespace are valid, but easily remedied by naming your functions as nodes within your own root object, like "myFooApp.happyFunc = function ( ... ) { ... }; ".
Functions that are available in the global namespace, or as nodes in your root object like above, can be invoked from the debugger directly, during development and debug. E.g., at the console command line, do "myFooApp.happyFunc(42)". This is an extremely powerful ability that does not exist (natively) in compiled programming languages. Try that with an anon func.
Anon funcs can be made more readable by assigning them to a var, and then passing the var as the callback (instead of inlining). E.g.:
var funky = function ( ... ) { ... };
jQuery('#otis').click(funky);
Using the above approach, you can group several anon funcs at the top of the parent func; below that, the meat of the sequential statements becomes much more tightly grouped and easier to read.
Anonymous functions are useful because they help you control which functions are exposed.
More detail: if there is no name, you can't reassign it or tamper with it anywhere but the exact place it was created. A good rule of thumb is: if you don't need to reuse this function anywhere, consider whether an anonymous function would be better, to prevent it from being tampered with.
Example:
If you're working on a big project with a lot of people, what happens when you have a function inside a bigger function and you give it a name? Anyone working with you who also edits code in the bigger function can do things to that smaller function at any time. What if you named it "add", for instance, and someone reassigned "add" to a number inside the same scope? Then the whole thing breaks!
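A small sketch of that failure mode (my own illustration):

function bigFunction() {
    function add(a, b) { return a + b; }

    // ...much later, a teammate reuses the name inside the same scope...
    add = 42;

    return add(1, 2); // TypeError: add is not a function
}

// Passed anonymously, there is no name in scope for anyone to clobber:
[1, 2, 3].map(function (n) { return n + 1; }); // [2, 3, 4]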
PS - I know this is a very old post, but there is a much simpler answer to this question and I wish someone had put it this way when I was looking for the answer myself as a beginner. I hope you're OK with reviving an old thread!
It's more readable to use named functions, and they are also capable of self-reference, as in the example below:
(function recursion(iteration) {
    if (iteration > 0) {
        console.log(iteration);
        recursion(--iteration);
    } else {
        console.log('done');
    }
})(20);
console.log('recursion defined? ' + (typeof recursion === 'function'));
http://jsfiddle.net/Yq2WD/
This is nice when you want an immediately invoked function that references itself but does not add to the global namespace. It's still readable but not polluting. Have your cake and eat it too.
Hi, my name is Jason OR hi, my name is ???? you pick.
Well, just to be clear for the sake of my arguments, the following are all anonymous functions/function expressions in my book:
var x = function(){ alert('hi'); },
    indexOfHandyMethods = {
        hi: function(){ alert('hi'); },
        high: function(){
            buyPotatoChips();
            playBobMarley();
        }
    };

someObject.someEventListenerHandlerAssigner( function(e){
    if(e.doIt === true){ doStuff(e.someId); }
} );

(function namedButAnon(){ alert('name visible internally only'); })()
Pros:
It can reduce a bit of cruft, particularly in recursive functions (where you could, and should, since arguments.callee is deprecated, still use a named reference internally per the last example), and it makes it clear that the function only ever fires in this one place.
Code legibility win: in the example of the object literal with anon funcs assigned as methods, it would be silly to add more places to hunt and peck for logic in your code when the whole point of that object literal is to plop some related functionality in the same conveniently referenced spot. When declaring public methods in a constructor, however, I do tend to define labeled functions inline and then assign as references of this.sameFuncName. It lets me use the same methods internally without the 'this.' cruft and makes order of definition a non-concern when they call each other.
Useful for avoiding needless global namespace pollution - internal namespaces, however, shouldn't ever be that broadly filled or handled by multiple teams simultaneously so that argument seems a bit silly to me.
I agree with the inline callbacks when setting short event handlers. It's silly to have to hunt for a 1-5 line function, especially since with JS and function hoisting the definitions could end up anywhere, not even within the same file. This can happen by accident without breaking anything, and no, you don't always have control of that stuff. Events always result in a callback function being fired, so there's little reason to add more links to the chain of names you need to scan through just to reverse-engineer simple event handlers in a large codebase. The stack-trace concern can be addressed by abstracting event triggers themselves into methods that log useful info when debug mode is on and then fire the triggers. I'm actually starting to build entire interfaces this way.
Useful when you WANT the order of function definition to matter. Sometimes you want to be certain a default function is what you think it is until a certain point in the code where it's okay to redefine it. Or you want breakage to be more obvious when dependencies get shuffled.
Cons:
Anon functions can't take advantage of function hoisting (see the sketch after this list). This is a major difference. I tend to take heavy advantage of hoisting to define my own explicitly named funcs and object constructors towards the bottom and get to the object definition and main-loop type stuff right up at the top. I find it makes the code easier to read when you name your vars well and get a broad view of what's going on before Ctrl-F-ing for details only when they matter to you. Hoisting can also be a huge benefit in heavily event-driven interfaces, where imposing a strict order of what's available when can bite you in the butt. Hoisting has its own caveats (like circular reference potential), but it is a very useful tool for organizing and making code legible when used right.
Legibility/debugging. Absolutely, anon funcs get used way too heavily at times, and that can make debugging and reading the code a hassle. Codebases that rely heavily on jQuery, for instance, can be a serious PITA to read and debug if you don't encapsulate the near-inevitable anon-heavy and massively overloaded args of the $ soup in a sensible way. jQuery's hover method, for instance, is a classic example of over-use of anon funcs when you drop two of them into it, since it's easy for a first-timer to assume it's a standard event-listener assignment method rather than one method overloaded to assign handlers for one or two events. $(this).hover(onMouseOver, onMouseOut) is a lot clearer than two anon funcs.
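To illustrate the hoisting difference from the first con above, a minimal sketch:

main(); // works: function declarations are hoisted, body and all

function main() {
    console.log('main-loop stuff can sit at the top of the file');
}

helper(); // TypeError: helper is not a function - only `var helper` was hoisted

var helper = function () {
    console.log('function expressions are not hoisted with their bodies');
};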
The eval function is a powerful and easy way to dynamically generate code, so what are the caveats?
1. Improper use of eval opens up your code for injection attacks.
2. Debugging can be more challenging (no line numbers, etc.).
3. eval'd code executes more slowly (no opportunity to compile/cache eval'd code).
Edit: As @Jeff Walden points out in the comments, #3 is less true today than it was in 2008. However, while some caching of compiled scripts may happen, it is limited to scripts that are eval'd repeatedly with no modification. A more likely scenario is that you are eval'ing scripts that undergo slight modifications each time and as such cannot be cached. Let's just say that SOME eval'd code executes more slowly.
eval isn't always evil. There are times where it's perfectly appropriate.
However, eval is currently and historically massively over-used by people who don't know what they're doing. That includes people writing JavaScript tutorials, unfortunately, and in some cases this can indeed have security consequences - or, more often, simple bugs. So the more we can do to throw a question mark over eval, the better. Any time you use eval you need to sanity-check what you're doing, because chances are you could be doing it a better, safer, cleaner way.
To give an all-too-typical example, to set the colour of an element with an id stored in the variable 'potato':
eval('document.' + potato + '.style.color = "red"');
If the authors of the kind of code above had a clue about the basics of how JavaScript objects work, they'd have realised that square brackets can be used instead of literal dot-names, obviating the need for eval:
document[potato].style.color = 'red';
...which is much easier to read as well as less potentially buggy.
(But then, someone who /really/ knew what they were doing would say:
document.getElementById(potato).style.color = 'red';
which is more reliable than the dodgy old trick of accessing DOM elements straight out of the document object.)
I believe it's because eval can execute any JavaScript handed to it as a string. Using it makes it easier for people to inject rogue code into the application.
It's generally only an issue if you're passing user input to eval.
Two points come to mind:
Security (but as long as you generate the string to be evaluated yourself, this might be a non-issue)
Performance: as long as the code to be executed is unknown, it cannot be optimized (on JavaScript and performance, see Steve Yegge's presentation).
Passing user input to eval() is a security risk, but each invocation of eval() also forces the engine to parse and compile the string from scratch, which can be a resource hog.
Mainly, it's a lot harder to maintain and debug. It's like a goto. You can use it, but it makes it harder to find problems and harder on the people who may need to make changes later.
One thing to keep in mind is that you can often use eval() to execute code in an otherwise restricted environment - social networking sites that block specific JavaScript functions can sometimes be fooled by breaking them up in an eval block -
eval('al' + 'er' + 't(\'' + 'hi there!' + '\')');
So if you're looking to run some JavaScript code where it might not otherwise be allowed (Myspace, I'm looking at you...) then eval() can be a useful trick.
However, for all the reasons mentioned above, you shouldn't use it for your own code, where you have complete control - it's just not necessary, and better-off relegated to the 'tricky JavaScript hacks' shelf.
Unless you eval() dynamic content (via CGI or user input), it is as safe and solid as all the other JavaScript in your page.
Along with the rest of the answers: I don't think eval'd strings can benefit from advanced minification.
It is a possible security risk, it has a different scope of execution, and is quite inefficient, as it creates an entirely new scripting environment for the execution of the code. See here for some more info: eval.
It is quite useful, though, and used with moderation can add a lot of good functionality.
Unless you are 100% sure that the code being evaluated is from a trusted source (usually your own application) then it's a surefire way of exposing your system to a cross-site scripting attack.
It's not necessarily that bad provided you know what context you're using it in.
If your application is using eval() to create an object from some JSON which has come back from an XMLHttpRequest to your own site, created by your trusted server-side code, it's probably not a problem.
Untrusted client-side JavaScript code can't do that much anyway. Provided the thing you're executing eval() on has come from a reasonable source, you're fine.
It greatly reduces your level of confidence about security.
If you want the user to input some logical expressions and evaluate them for AND and OR, then the JavaScript eval function is perfect. I can accept two strings and eval(uate) string1 === string2, etc.
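Even then it's worth validating the input before eval'ing it. A minimal sketch of my own (illustrative, not production-grade; a real parser would also check token order):

// Only allow the exact tokens of a boolean expression through to eval.
function evalBoolExpr(expr) {
    var tokens = expr.match(/true|false|&&|\|\||[!()]|\s+|./g) || [];
    var allowed = /^(true|false|&&|\|\||[!()]|\s+)$/;
    if (!tokens.every(function (t) { return allowed.test(t); })) {
        throw new Error('Disallowed token in: ' + expr);
    }
    return eval(expr); // only boolean literals and operators remain
}

console.log(evalBoolExpr('true && (false || !false)')); // true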
If you spot the use of eval() in your code, remember the mantra “eval() is evil.”
This function takes an arbitrary string and executes it as JavaScript code. When the code in question is known beforehand (not determined at runtime), there's no reason to use eval(). If the code is dynamically generated at runtime, there's often a better way to achieve the goal without eval(). For example, just using square bracket notation to access dynamic properties is better and simpler:
// antipattern
var property = "name";
alert(eval("obj." + property));
// preferred
var property = "name";
alert(obj[property]);
Using eval() also has security implications, because you might be executing code (for example, coming from the network) that has been tampered with. This is a common antipattern when dealing with a JSON response from an Ajax request. In those cases it's better to use the browsers' built-in methods to parse the JSON response to make sure it's safe and valid. For browsers that don't support JSON.parse() natively, you can use a library from JSON.org.
It's also important to remember that passing strings to setInterval(), setTimeout(), and the Function() constructor is, for the most part, similar to using eval() and therefore should be avoided. Behind the scenes, JavaScript still has to evaluate and execute the string you pass as programming code:
// antipatterns
setTimeout("myFunc()", 1000);
setTimeout("myFunc(1, 2, 3)", 1000);

// preferred
setTimeout(myFunc, 1000);
setTimeout(function () {
    myFunc(1, 2, 3);
}, 1000);
Using the new Function() constructor is similar to eval() and should be approached with care. It can be a powerful construct, but it is often misused. If you absolutely must use eval(), you can consider using new Function() instead. There is a small potential benefit, because the code evaluated in new Function() runs in a local function scope, so any variables defined with var in the evaluated code will not become globals automatically. Another way to prevent automatic globals is to wrap the eval() call in an immediate function.
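For example, a small sketch of both techniques (non-strict code, run at the top level of a script):

// A direct eval at global scope can create a global variable:
eval('var leaked = 1;');
console.log(typeof leaked); // "number"

// new Function() runs its body in its own function scope:
new Function('var contained = 2;')();
console.log(typeof contained); // "undefined"

// Wrapping eval in an immediate function has a similar effect:
(function () {
    eval('var alsoContained = 3;');
}());
console.log(typeof alsoContained); // "undefined"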
EDIT: As Benjie's comment suggests, this no longer seems to be the case as of Chrome 108; Chrome can now garbage-collect eval'd scripts. The original answer follows.
Garbage collection
The browser's garbage collector has no idea whether the code that's eval'd can be removed from memory, so it just keeps it stored until the page is reloaded.
Not too bad if your users are only on your page briefly, but it can be a problem for web apps.
Here's a script to demo the problem
https://jsfiddle.net/CynderRnAsh/qux1osnw/
document.getElementById("evalLeak").onclick = (e) => {
    for (let x = 0; x < 100; x++) {
        eval(x.toString());
    }
};
Something as simple as the above causes a small amount of memory to be stored until the app dies. It's worse when the eval'd script is a giant function called on an interval.
Besides the possible security issues if you are executing user-submitted code, most of the time there's a better way that doesn't involve re-parsing the code every time it's executed. Anonymous functions or object properties can replace most uses of eval and are much safer and faster.
This may become more of an issue as the next generation of browsers come out with some flavor of a JavaScript compiler. Code executed via Eval may not perform as well as the rest of your JavaScript against these newer browsers. Someone should do some profiling.
This is one of the good articles discussing eval and why it is not evil:
http://www.nczonline.net/blog/2013/06/25/eval-isnt-evil-just-misunderstood/
I'm not saying you should go run out and start using eval() everywhere. In fact, there are very few good use cases for running eval() at all. There are definitely concerns with code clarity, debugability, and certainly performance that should not be overlooked. But you shouldn't be afraid to use it when you have a case where eval() makes sense. Try not using it first, but don't let anyone scare you into thinking your code is more fragile or less secure when eval() is used appropriately.
eval() is very powerful and can be used to execute a JS statement or evaluate an expression. But the question isn't about the uses of eval(); let's just say the string you run with eval() is somehow affected by a malicious party. In the end you will be running malicious code. With power comes great responsibility, so use it wisely if you use it at all.
This isn't related much to the eval() function itself, but this article has pretty good information:
http://blogs.popart.com/2009/07/javascript-injection-attacks/
If you are looking for the basics of eval() look here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/eval
The JavaScript Engine has a number of performance optimizations that it performs during the compilation phase. Some of these boil down to being able to essentially statically analyze the code as it lexes, and pre-determine where all the variable and function declarations are, so that it takes less effort to resolve identifiers during execution.
But if the Engine finds an eval(..) in the code, it essentially has to assume that all its awareness of identifier location may be invalid, because it cannot know at lexing time exactly what code you may pass to eval(..) to modify the lexical scope, or the contents of the object you may pass to with to create a new lexical scope to be consulted.
In other words, in the pessimistic sense, most of those optimizations it would make are pointless if eval(..) is present, so it simply doesn't perform the optimizations at all.
This explains it all.
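Concretely, a small sketch: a direct eval can introduce a variable into an existing scope, which is exactly what the engine cannot predict while compiling.

function lookup(src) {
    var x = 1;
    eval(src);      // might declare a new x, might not - unknowable until runtime
    console.log(x); // so the engine can't pre-resolve this identifier
}

lookup('');           // 1
lookup('var x = 2;'); // 2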
Reference :
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#eval
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20&%20closures/ch2.md#performance
It's not always a bad idea. Take code generation, for example. I recently wrote a library called Hyperbars which bridges the gap between virtual-dom and handlebars. It does this by parsing a handlebars template and converting it to hyperscript, which is subsequently used by virtual-dom. The hyperscript is generated as a string first, and before it is returned, eval() turns it into executable code. I have found eval() in this particular situation to be the exact opposite of evil.
Basically from
<div>
    {{#each names}}
        <span>{{this}}</span>
    {{/each}}
</div>
To this
(function (state) {
    var Runtime = Hyperbars.Runtime;
    var context = state;
    return h('div', {}, [Runtime.each(context['names'], context, function (context, parent, options) {
        return [h('span', {}, [options['#index'], context])]
    })])
}.bind({}))
The performance of eval() isn't an issue in a situation like this because you only need to interpret the generated string once and then reuse the executable output many times over.
If you're curious, you can see how the code generation was achieved here.
I would go as far as to say that it doesn't really matter if you use eval() in javascript which is run in browsers.*(caveat)
All modern browsers have a developer console where you can execute arbitrary javascript anyway and any semi-smart developer can look at your JS source and put whatever bits of it they need to into the dev console to do what they wish.
*As long as your server endpoints have the correct validation & sanitisation of user supplied values, it should not matter what gets parsed and eval'd in your client side javascript.
If you were to ask if it's suitable to use eval() in PHP however, the answer is NO, unless you whitelist any values which may be passed to your eval statement.
I won't attempt to refute anything said heretofore, but I will offer this use of eval() that (as far as I know) can't be done any other way. There are probably other ways to code this, and probably ways to optimize it, but this is done longhand and without any bells and whistles for clarity's sake, to illustrate a use of eval that really doesn't have any alternatives: dynamically (or more accurately, programmatically) created object names (as opposed to values).
//Place this in a common/global JS lib:
var NS = function (namespace) {
    var namespaceParts = String(namespace).split(".");
    var namespaceToTest = "";
    for (var i = 0; i < namespaceParts.length; i++) {
        if (i === 0) {
            namespaceToTest = namespaceParts[i];
        }
        else {
            namespaceToTest = namespaceToTest + "." + namespaceParts[i];
        }
        // Create each missing link in the namespace chain as an empty object.
        if (eval('typeof ' + namespaceToTest) === "undefined") {
            eval(namespaceToTest + ' = {}');
        }
    }
    return eval(namespace);
}

//Then, use this in your class definition libs:
NS('Root.Namespace').Class = function (settings) {
    //Class constructor code here
}

//some generic method:
Root.Namespace.Class.prototype.Method = function (args) {
    //Code goes here
    //this.MyOtherMethod("foo"); // => "foo"
    return true;
}

//Then, in your applications, use this to instantiate an instance of your class:
var anInstanceOfClass = new Root.Namespace.Class(settings);
EDIT: By the way, for all the security reasons pointed out heretofore, I wouldn't suggest basing your object names on user input. I can't imagine any good reason you'd want to do that, though. Still, I thought I'd point out that it wouldn't be a good idea :)