eval and Obfuscation of JavaScript

I wish to keep certain JavaScript functions on the server and then send them via JSON as and when required.
I will then eval() these functions and use them accordingly.
Now my problem is: if I minify the source file using the Closure Compiler or some other obfuscation library, is there any chance the eval'ed function will conflict with my obfuscated JS file?
For reference:
src.js <- has the eval() call on a JSON property
response.func = "function someAsyncFunc() { /* does nothing as of now */ }";
var zixuy = eval(response.func); //Notice the name of the variable, its a result of Obfuscation
Now, once the function is eval'ed, is there any chance that weird things will happen within that function if any variables inside it depend on the parent's scope?
If yes, is there a workaround or solution for this?
UPDATE:
I was finally able to drill this basic sense into him after all.

Related

Best practices for creating a "debug mode" variable for my app?

I was about to comment out blocks of code that just printed/console.logged debugging info, and I thought, why don't I create a global scope "debug" variable, and instead of commenting this code out, put an if (DEBUG == 1) {} around it?
The reason I ask is because I'm working with javascript at the moment, and my code is spread across a few .js files. If I create a DEBUG variable in app.js, I'll need to export it from app.js and require it in other files; is this consistent with best practices? Is there a better way to do what I'm thinking of?
There are many ways to do this. Most logging libraries have levels that let you output only messages above some minimum level. Alternatively, if you're just using console.log or console.debug and are content to keep those in lieu of more robust log streams, you can change their behavior with your own small logging module. For example, if you have a debug.js file that exports a debug() function, import/require it once in each other file and call debug() instead of console.debug(). (You could even reassign console.debug = debug, but that has potential side effects in any dependencies or dependent code.)
In debug.js, your function can simply check an environment variable (in node.js or similar) or global variable (in the browser) or even a hard-coded flag, and immediately return (doing nothing) if you're in production or not in the mood to print debug messages.
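A minimal sketch of what such a debug.js might look like; the DEBUG flag names and the export style here are illustrative assumptions, so adapt them to your module system:
// debug.js - tiny logging helper (flag names are assumptions)
var enabled =
    (typeof process !== 'undefined' && process.env && process.env.DEBUG === '1') || // Node
    (typeof window !== 'undefined' && window.DEBUG === true);                       // browser

function debug() {
    if (!enabled) return;                    // do nothing in production
    console.debug.apply(console, arguments); // otherwise forward to console.debug
}

if (typeof module !== 'undefined') module.exports = debug;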
Take a look at bunyan's log levels as an example of how a popular logging library handles this: https://www.npmjs.com/package/bunyan#levels
If you are programming the browser and you want a quick and dirty global variable, you can do window.myVar = 'whatever'.

Creating and minifying JavaScript dynamically in ASP.NET MVC server-side code

I am using a ASP.NET route (to intercept the call to the .js) and controller to generate some JS I want to use on my client. The reason I'm doing this is so as to not have to duplicate id's or constants on the client. Here's the output of my JS:
app.serviceRootURL = 'http://localhost:65211/'; // set in my web.config
app.ajaxResponseStatuses = [
    { "status": "Success", "id": 0 }, // set in my C# DTO
    { "status": "Failure", "id": 1 }
];
First of all, I am not sure if this is the best approach, so other suggestions to doing this would be beneficial.
More importantly though, I'm wondering how I can bundle and minify this. As I understand it, even if I could minify the JS at compile or run-time, minification will change the names of my variables. So in the above JS, app.ajaxResponseStatuses could get changed to a.bc, and then in the actual JS files where I'm trying to access that variable, they could be looking for x.yz.
Can I minify this code and get it to the server?
Will I still be able to use the above properties in other minified files?
(bonus points) Is this a good aproach to pass server-side-only values to be used on the client?
Part 1
If you are generating the js at runtime, bundling isn't possible (at least not efficiently). You would have to create a new bundle for every request which isn't terribly quick. Plus, you wouldn't be able to cache the regular, constant script bundle.
EDIT: While bundling server-generated js isn't practical, rendering the values into a script tag in the page can achieve the same benefit of bundling, fewer HTTP calls. See the edit in Part 3 for more.
Minifying the server generated js however, is totally possible. This question should have the answer you're looking for. However, I'd recommend you cache this on the server if possible, as the minification process itself could take longer than simply sending down the extra bits.
Part 2
In most minifiers, global variables (those accessible on the window object) are skipped during name mangling. By the same token, variables that are accessed in a file but not defined within it are not renamed.
For example, if you have the following file...
// outside of a closure, so globally accessible
var foo = 1;

function bar() {
    // within a closure, and defined with `var`, not globally accessible
    var bar;

    // reference to variable declared in another file
    baz = null;
}
it would be minified as follows (with whitespace included for readability):
var foo = 1;

function bar() {
    var b;
    baz = null;
}
This is one reason it is important to always declare your variables using the var keyword, otherwise they are assumed to be references to global variables and will not be minified.
Also, JSON (not Javascript object literals!!!) will never be distorted by minifiers, because it consists of string literals for all keys, and all values that aren't of another literal type.
Part 3
Not a bad way, and at my job we do use this approach. For small files though, or simple config values, we have transitioned to rendering server values in a script tag using ASP.NET in the actual view. i.e.
Default.aspx
<script> window.globals = <%= JsonConvert.SerializeObject(new AppGlobals(currentUser)) %>; </script>
We rip this out into a code behind, but the premise is the same.
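On the client, the minified scripts can then read these values off window.globals; because they are accessed as properties of a global object, minifiers leave the names alone. A hedged sketch, assuming the serialized object carries the ajaxResponseStatuses array from the question:
// client.js - hypothetical consumer of the server-rendered globals
var statuses = (window.globals && window.globals.ajaxResponseStatuses) || [];
var success = statuses.filter(function (s) { return s.status === 'Success'; })[0];
console.log('Success id:', success ? success.id : 'unknown');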
EDIT:
Server-Generated JS (at its own URI)
Pros
Cacheable by browser (if fresh values aren't needed on every request)
Cons
Extra round trip
Use when:
Your generated files are large, but rarely change or are the same for multiple users. These scripts can be treated the same as other static assets. To give an example, we serve a js file containing all the text in our app for localization purposes. We can serve a different js file based on the language set in the user's settings, but these values only change once at most with every release, so we can set aggressive cache headers and use a hash in the uri, along with a query string for the locale, to leverage browser caching and download each language file only once per client. Plus, if this file is going to be the same for every user accessing the same uri, you can cache it at the web server (IIS, Apache, etc.).
Ex: /api/language.v1-0-0.js?locale=en
Your js is independent from the rest of your app and not having it won't delay rendering. In this case, you can add the async attribute to your script tag, and this file will be downloaded asynchronously and executed when it is received without preventing the execution of other javascript.
Server-Rendered JS (within the page in a script tag)
Pros
No extra HTTP calls
Cons
Can add extra weight to your HTML, which may not be cacheable or minified depending on your circumstances
Use when:
Your values change often. The weight added to the page should be negligible unless you have a huge number of values (in that case, you might consider splitting them up and adding API endpoints for these values, and only getting them when you need them). With this, you can cut out the extra HTTP call as the js is injected into a script tag on a page the user would already have to retrieve.
But...
Don't waste too much time worrying about it. The difference between these two approaches is almost always negligible. If it becomes a problem, try both and use the better option for your case.

Is eval in Javascript considered safe if not using variable code? [duplicate]

I'm writing some JavaScript code to parse user-entered functions (for spreadsheet-like functionality). Having parsed the formula I could convert it into JavaScript and run eval() on it to yield the result.
However, I've always shied away from using eval() if I can avoid it because it's evil (and, rightly or wrongly, I've always thought it is even more evil in JavaScript, because the code to be evaluated might be changed by the user).
So, when is it OK to use it?
I'd like to take a moment to address the premise of your question - that eval() is "evil". The word "evil", as used by programming language people, usually means "dangerous", or more precisely "able to cause lots of harm with a simple-looking command". So, when is it OK to use something dangerous? When you know what the danger is, and when you're taking the appropriate precautions.
To the point, let's look at the dangers in the use of eval(). There are probably many small hidden dangers just like everything else, but the two big risks - the reason why eval() is considered evil - are performance and code injection.
Performance - eval() runs the interpreter/compiler. If your code is compiled, then this is a big hit, because you need to call a possibly-heavy compiler in the middle of run-time. However, JavaScript is still mostly an interpreted language, which means that calling eval() is not a big performance hit in the general case (but see my specific remarks below).
Code injection - eval() potentially runs a string of code under elevated privileges. For example, a program running as administrator/root would never want to eval() user input, because that input could potentially be "rm -rf /etc/important-file" or worse. Again, JavaScript in a browser doesn't have that problem, because the program is running in the user's own account anyway. Server-side JavaScript could have that problem.
On to your specific case. From what I understand, you're generating the strings yourself, so assuming you're careful not to allow a string like "rm -rf something-important" to be generated, there's no code injection risk (but please remember, it's very very hard to ensure this in the general case). Also, if you're running in the browser then code injection is a pretty minor risk, I believe.
As for performance, you'll have to weigh that against ease of coding. It is my opinion that if you're parsing the formula, you might as well compute the result during the parse rather than run another parser (the one inside eval()). But it may be easier to code using eval(), and the performance hit will probably be unnoticeable. It looks like eval() in this case is no more evil than any other function that could possibly save you some time.
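For concreteness, a hedged sketch of the trade-off being described, where your own parser has already translated the user's formula into a JavaScript expression (the cells object and expr string are illustrative):
var cells = { A1: 2, B1: 3 };
var expr = 'cells.A1 * cells.B1 + 1'; // produced by your parser, not raw user text
var result = eval(expr);              // 7 - convenient, but it runs the JS parser again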
eval() isn't evil. Or, if it is, it's evil in the same way that reflection, file/network I/O, threading, and IPC are "evil" in other languages.
If, for your purpose, eval() is faster than manual interpretation, or makes your code simpler, or more clear... then you should use it. If neither, then you shouldn't. Simple as that.
When you trust the source.
In the case of JSON, it is more or less hard to tamper with the source, because it comes from a web server you control. As long as the JSON itself contains no data a user has uploaded, there is no major drawback to using eval.
In all other cases I would go to great lengths to ensure user-supplied data conforms to my rules before feeding it to eval().
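One classic pattern for this, before JSON.parse was universally available, was to wrap the trusted response in parentheses so eval() parses it as an expression rather than as a block:
var json = '{ "status": "Success", "id": 0 }'; // came from your own server
var obj = eval('(' + json + ')');              // parentheses force expression parsing
console.log(obj.id); // 0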
Let's get real folks:
Every major browser now has a built-in console which your would-be hacker can use freely to invoke any function with any value - so why would they bother with an eval statement, even if they could?
If it takes 0.2 seconds to compile 2000 lines of JavaScript, what is my performance degradation if I eval four lines of JSON?
Even Crockford's explanation for 'eval is evil' is weak.
eval is Evil: The eval function is the most misused feature of JavaScript. Avoid it.
As Crockford himself might say "This kind of statement tends to generate irrational neurosis. Don't buy it."
Understanding eval and knowing when it might be useful is way more important. For example, eval is a sensible tool for evaluating server responses that were generated by your software.
BTW: Prototype.js calls eval directly five times (including in evalJSON() and evalResponse()). jQuery uses it in parseJSON (via Function constructor).
I tend to follow Crockford's advice for eval(), and avoid it altogether. Even ways that appear to require it do not. For example, the setTimeout() allows you to pass a function rather than eval.
setTimeout(function() {
    alert('hi');
}, 1000);
Even if it's a trusted source, I don't use it, because the code returned by JSON might be garbled, which could at best do something wonky, at worst, expose something bad.
Bottom Line
If you created or sanitized the code you eval, it is never evil.
Slightly More Detailed
eval is evil if running on the server using input submitted by a client that was not created by the developer or that was not sanitized by the developer.
eval is not evil if running on the client, even if using unsanitized input crafted by the client.
Obviously you should always sanitize the input, as to have some control over what your code consumes.
Reasoning
The client can run any arbitrary code they want to, even if the developer did not code it; this is true not only for what is evaled, but also for the call to eval itself.
Eval is complementary to compilation, which is used when templating code. By templating I mean writing a simplified template generator that generates useful template code, which increases development speed.
I have written a framework where developers don't use eval themselves; they use our framework, and that framework in turn has to use eval to generate templates.
The performance of eval can be improved by the following method: instead of executing the script directly, return a function.
var a = eval("3 + 5");
It should be organized as
var f = eval("(function(a,b) { return a + b; })");
var a = f(3,5);
Caching f will certainly improve the speed.
Also Chrome allows debugging of such functions very easily.
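A minimal sketch of such caching, keyed by the expression's source string (the helper name is illustrative):
var compiled = {}; // cache of eval()-compiled functions

function compile(src) {
    // Only pay the eval() cost the first time a given expression is seen.
    return compiled[src] || (compiled[src] = eval('(function (a, b) { return ' + src + '; })'));
}

var add = compile('a + b');
console.log(add(3, 5));                // 8
console.log(compile('a + b') === add); // true - reused, no second eval()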
Regarding security, using eval or not will hardly make any difference:
First of all, the browser invokes the entire script in a sandbox.
Any code that is evil in eval is evil in the browser itself. An attacker can easily inject a script node into the DOM and do anything if they can eval anything. Not using eval will not make any difference.
It is mostly poor server-side security that is harmful. Poor cookie validation or a poor ACL implementation on the server causes most attacks.
A recent Java vulnerability, for instance, was in Java's native code. JavaScript was and is designed to run in a sandbox, whereas applets were designed to run outside a sandbox with certificates, etc., and that led to vulnerabilities and many other things.
Writing code that imitates a browser is not difficult. All you have to do is make an HTTP request to the server with your favourite user-agent string. All testing tools mock browsers anyway; if attackers want to harm you, eval is their last resort. They have many other ways to deal with your server-side security.
The browser DOM does not have access to files, nor to a user name. In fact, there is nothing on the machine that eval can give access to.
If your server-side security is solid enough to withstand an attack from anywhere, you should not worry about eval. As I mentioned, if eval did not exist, attackers would have many other tools to hack into your server, irrespective of your browser's eval capability.
Eval is only good for generating templates that do complex string processing based on something that is not known in advance. For example, I will prefer
"FirstName + ' ' + LastName"
As opposed to
"LastName + ' ' + FirstName"
As my display name, which can come from a database and which is not hardcoded.
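A hedged sketch of how such a configurable display-name expression might be applied to a record; the variable and field names are illustrative:
// Hypothetical: the expression string comes from a database or config, not from users.
var displayNameExpr = "FirstName + ' ' + LastName";

function formatName(record, expr) {
    var FirstName = record.firstName; // expose fields under the names the expression expects
    var LastName = record.lastName;
    return eval(expr);                // direct eval sees the local FirstName/LastName
}

console.log(formatName({ firstName: 'Ada', lastName: 'Lovelace' }, displayNameExpr)); // "Ada Lovelace"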
When debugging in Chrome (v28.0.1500.72), I found that variables are not bound to closures if they are not used in a nested function that produces the closure. I guess, that's an optimization of the JavaScript engine.
BUT: when eval() is used inside a function that causes a closure, ALL the variables of outer functions are bound to the closure, even if they are not used at all. If someone has the time to test if memory leaks can be produced by that, please leave me a comment below.
Here's my test code:
(function () {
    var eval = function (arg) {
    };

    function evalTest() {
        var used = "used";
        var unused = "not used";

        (function () {
            used.toString(); // Variable "unused" is visible in debugger
            eval("1");
        })();
    }

    evalTest();
})();

(function () {
    var eval = function (arg) {
    };

    function evalTest() {
        var used = "used";
        var unused = "not used";

        (function () {
            used.toString(); // Variable "unused" is NOT visible in debugger
            var noval = eval;
            noval("1");
        })();
    }

    evalTest();
})();

(function () {
    var noval = function (arg) {
    };

    function evalTest() {
        var used = "used";
        var unused = "not used";

        (function () {
            used.toString(); // Variable "unused" is NOT visible in debugger
            noval("1");
        })();
    }

    evalTest();
})();
What I'd like to point out here is that eval() need not necessarily refer to the native eval() function. It all depends on the name of the function. So when the native eval() is called via an alias (say var noval = eval; and then, in an inner function, noval(expression);), the evaluation of expression may fail when it refers to variables that should be part of the closure but actually are not.
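A small illustration of the difference (run as a plain browser script): a direct eval() call sees the enclosing scope, while a call through an alias is an indirect eval and evaluates in the global scope:
var x = 'global';

function demo() {
    var x = 'local';
    var noval = eval;        // calls through the alias are *indirect* eval
    console.log(eval('x'));  // "local"  - direct eval sees the local scope
    console.log(noval('x')); // "global" - indirect eval runs in the global scope
}

demo();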
I've seen people advocate not using eval because it is evil, but I've seen the same people use Function and setTimeout dynamically, so they use eval under the hood. :D
BTW, if your sandbox is not secure enough (for example, if you're working on a site that allows code injection), eval is the last of your problems. The basic rule of security is that all input is evil, but in the case of JavaScript even JavaScript itself can be evil, because in JavaScript you can overwrite any function and you just can't be sure you're using the real one. So if malicious code starts before yours, you can't trust any JavaScript built-in function. :D
Now the epilogue to this post is:
If you REALLY need it (80% of the time eval is NOT needed) and you're sure of what you're doing, just use eval (or better, Function ;)). Closures and OOP cover 80-90% of the cases where eval can be replaced by another kind of logic; the rest is dynamically generated code (for example, if you're writing an interpreter) and, as you already said, evaluating JSON (here you can use Crockford's safe evaluation ;)).
The only instance when you should be using eval() is when you need to run dynamic JS on the fly. I'm talking about JS that you download asynchronously from the server...
...And 9 times of 10 you could easily avoid doing that by refactoring.
On the server side, eval is useful when dealing with external scripts such as SQL, InfluxDB, or Mongo queries, where custom validation at runtime can be done without re-deploying your services.
For example, an achievement service with the following metadata:
{
    "568ff113-abcd-f123-84c5-871fe2007cf0": {
        "msg_enum": "quest/registration",
        "timely": "all_times",
        "scope": [
            "quest/daily-active"
        ],
        "query": "`SELECT COUNT(point) AS valid from \"${userId}/dump/quest/daily-active\" LIMIT 1`",
        "validator": "valid > 0",
        "reward_external": "ewallet",
        "reward_external_payload": "`{\"token\": \"${token}\", \"userId\": \"${userId}\", \"amountIn\": 1, \"conversionType\": \"quest/registration:silver\", \"exchangeProvider\":\"provider/achievement\",\"exchangeType\":\"payment/quest/registration\"}`"
    },
    "efdfb506-1234-abcd-9d4a-7d624c564332": {
        "msg_enum": "quest/daily-active",
        "timely": "daily",
        "scope": [
            "quest/daily-active"
        ],
        "query": "`SELECT COUNT(point) AS valid from \"${userId}/dump/quest/daily-active\" WHERE time >= '${today}' ${ENV.DAILY_OFFSET} LIMIT 1`",
        "validator": "valid > 0",
        "reward_external": "ewallet",
        "reward_external_payload": "`{\"token\": \"${token}\", \"userId\": \"${userId}\", \"amountIn\": 1, \"conversionType\": \"quest/daily-active:silver\", \"exchangeProvider\":\"provider/achievement\",\"exchangeType\":\"payment/quest/daily-active\"}`"
    }
}
This then allows:
Direct injection of objects/values through template-literal strings in the JSON, useful for templating text
Use as a comparator, say when we make rules for how to validate quests or events in a CMS
Cons of this:
There can be errors in the code that break things in the service if it is not fully tested.
If a hacker can write script on your system, then you are pretty much screwed.
One way to validate your scripts is to keep a hash of them somewhere safe, so you can check them before running.
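A sketch of how such a validator string might be applied to a query result; the function and field names here are illustrative, not the service's actual code:
// Hypothetical check using the "validator" expression from the metadata above.
function checkAchievement(meta, row) {
    var valid = row.valid;                // the column produced by the metadata's query
    // meta.validator ("valid > 0") is developer-authored metadata, not user input.
    return eval(meta.validator) === true;
}

console.log(checkAchievement({ validator: 'valid > 0' }, { valid: 2 })); // true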
Eval isn't evil, just misused.
If you created the code going into it or can trust it, it's alright.
People keep talking about how user input doesn't matter with eval. Well, sort of.
If there is user input that goes to the server, then comes back to the client, and that code is used in eval without being sanitized, then congratulations: you've opened Pandora's box for user data to be sent to whoever.
Depending on where the eval is: many websites use SPAs, and eval could make it easier for the user to access application internals that otherwise wouldn't have been easy to reach. Now they can make a bogus browser extension that can tap into that eval and steal data again.
Just figure out what the point of your eval is. Generating code isn't really ideal when you could simply make methods to do that sort of thing, use objects, or the like.
Now a nice example of using eval.
Your server is reading the swagger file that you have created. Many of the URL params are created in the format {myParam}. So you'd like to read the URLs and then convert them to template strings without having to do complex replacements because you have many endpoints. So you may do something like this.
Note this is a very simple example.
const params = { id: 5 };
const route = '/api/user/{id}';
// Turn "{id}" into "${params.id}" and wrap the result in backticks,
// so eval() sees a template literal and interpolates params.id.
const template = '`' + route.replace(/{/g, '${params.') + '`';
const url = eval(template); // '/api/user/5'
eval is rarely the right choice. While there may be numerous instances where you can accomplish what you need to accomplish by concatenating a script together and running it on the fly, you typically have much more powerful and maintainable techniques at your disposal: associative-array notation (obj["prop"] is the same as obj.prop), closures, object-oriented techniques, functional techniques - use them instead.
As far as client script goes, I think the issue of security is a moot point. Everything loaded into the browser is subject to manipulation and should be treated as such. There is zero risk in using an eval() statement when there are much easier ways to execute JavaScript code and/or manipulate objects in the DOM, such as the URL bar in your browser.
javascript:alert("hello");
If someone wants to manipulate their DOM, I say swing away. Security to prevent any type of attack should always be the responsibility of the server application, period.
From a pragmatic standpoint, there's no benefit to using an eval() in a situation where things can be done otherwise. However, there are specific cases where an eval SHOULD be used. When so, it can definitely be done without any risk of blowing up the page.
<html>
<body>
    <textarea id="output"></textarea><br/>
    <input type="text" id="input" />
    <button id="button" onclick="execute()">eval</button>
    <script type="text/javascript">
        var execute = function () {
            var inputEl = document.getElementById('input');
            var toEval = inputEl.value;
            var outputEl = document.getElementById('output');
            var output = "";
            try {
                output = eval(toEval);
            }
            catch (err) {
                for (var key in err) {
                    output += key + ": " + err[key] + "\r\n";
                }
            }
            outputEl.value = output;
        }
    </script>
</body>
</html>
Since no one has mentioned it yet, let me add that eval is super useful for Webassembly-Javascript interop. While it's certainly ideal to have pre-made scripts included in your page that your WASM code can invoke directly, sometimes it's not practicable and you need to pass in dynamic Javascript from a Webassembly language like C# to really accomplish what you need to do.
It's also safe in this scenario because you have complete control over what gets passed in. Well, I should say, it's no less safe than composing SQL statements using C#, which is to say it needs to be done carefully (properly escaping strings, etc.) whenever user-supplied data is used to generate the script. But with that caveat it has a clear place in interop situations and is far from "evil".
It's okay to use it if you have complete control over the code that's passed to the eval function.
Code generation. I recently wrote a library called Hyperbars which bridges the gap between virtual-dom and handlebars. It does this by parsing a handlebars template and converting it to hyperscript. The hyperscript is generated as a string first and before returning it, eval() it to turn it into executable code. I have found eval() in this particular situation the exact opposite of evil.
Basically from
<div>
    {{#each names}}
        <span>{{this}}</span>
    {{/each}}
</div>
To this
(function (state) {
    var Runtime = Hyperbars.Runtime;
    var context = state;
    return h('div', {}, [Runtime.each(context['names'], context, function (context, parent, options) {
        return [h('span', {}, [options['#index'], context])]
    })])
}.bind({}))
The performance of eval() isn't an issue in a situation like this either, because you only need to interpret the generated string once and then reuse the executable output many times over.
You can see how the code generation was achieved if you're curious here.
There is no reason not to use eval() as long as you can be sure that the code comes from you or from the actual user. Even though the user can manipulate what gets sent into the eval() function, that's not a security problem, because they can already manipulate the source code of the web site and could therefore change the JavaScript code itself.
So... when should you not use eval()? Eval() should not be used when there is a chance that a third party could change the input, for example by intercepting the connection between the client and your server (but if that is a problem, use HTTPS). You shouldn't eval() code that is written by others, such as on a forum.
If it's really needed eval is not evil. But 99.9% of the uses of eval that I stumble across are not needed (not including setTimeout stuff).
For me the evil is not a performance or even a security issue (well, indirectly it's both). All such unnecessary uses of eval add to a maintenance hell. Refactoring tools are thrown off. Searching for code is hard. Unanticipated effects of those evals are legion.
My example of using eval: import.
How it's usually done.
var components = require('components');
var Button = components.Button;
var ComboBox = components.ComboBox;
var CheckBox = components.CheckBox;
...
// That quickly gets very boring
But with the help of eval and a little helper function it gets a much better look:
var components = require('components');
eval(importable('components', 'Button', 'ComboBox', 'CheckBox', ...));
importable might look like the following (this version doesn't support importing specific members).
function importable(path) {
    var name;
    var pkg = eval(path);
    var result = '\n';

    for (name in pkg) {
        result += 'if (name !== undefined) throw "import error: name already exists";\n'.replace(/name/g, name);
    }

    for (name in pkg) {
        result += 'var name = path.name;\n'.replace(/name/g, name).replace('path', path);
    }

    return result;
}
I think any cases of eval being justified would be rare. You're more likely to use it thinking that it's justified than you are to use it when it's actually justified.
The security issues are the most well known. But also be aware that JavaScript uses JIT compilation and this works very poorly with eval. Eval is somewhat like a blackbox to the compiler, and JavaScript needs to be able to predict code ahead of time (to some extent) in order to safely and correctly apply performance optimisations and scoping. In some cases, the performance impact can even affect other code outside eval.
If you want to know more:
https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20%26%20closures/ch2.md#eval
Only during testing, if possible. Also note that eval() is much slower than other specialized JSON etc. evaluators.
My belief is that eval is a very powerful function for client-side web applications, and safe... as safe as JavaScript itself, which is not. :-) The security issues are essentially a server-side problem because, now, with tools like Firebug, you can attack any JavaScript application.
When is JavaScript's eval() not evil?
I always try to discourage the use of eval. Almost always, a cleaner and more maintainable solution is available. Eval is not needed even for JSON parsing. Eval adds to maintenance hell. Not without reason, it is frowned upon by masters like Douglas Crockford.
But I found one example where it should be used:
When you need to pass the expression.
For example, I have a function that constructs a general google.maps.ImageMapType object for me, but I need to tell it the recipe, how should it construct the tile URL from the zoom and coord parameters:
my_func({
    name: "OSM",
    tileURLexpr: '"http://tile.openstreetmap.org/"+b+"/"+a.x+"/"+a.y+".png"',
    ...
});
function my_func(opts)
{
    return new google.maps.ImageMapType({
        getTileUrl: function (coord, zoom) {
            var b = zoom;
            var a = coord;
            return eval(opts.tileURLexpr);
        },
        ....
    });
}
Eval is useful for code generation when you don't have macros.
For (a stupid) example, if you're writing a Brainfuck compiler, you'll probably want to construct a function that performs the sequence of instructions as a string, and eval it to return a function.
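A toy sketch of that idea - build the function's source as a string, eval() it once, and reuse the compiled result (not a real Brainfuck compiler; names are illustrative):
function compileIncrements(n) {
    var src = '(function (x) {';
    for (var i = 0; i < n; i++) {
        src += ' x = x + 1;'; // one generated "instruction" per step
    }
    src += ' return x; })';
    return eval(src);         // the parentheses make eval return the function
}

var addThree = compileIncrements(3);
console.log(addThree(10)); // 13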
When you parse a JSON structure with a parse function (for example, jQuery.parseJSON), it expects a perfectly formed JSON string (every property name in double quotes). JavaScript is more flexible, so you can use eval() to avoid that strictness.
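For example, this non-strict input would be rejected by JSON.parse or jQuery.parseJSON but is a valid JavaScript object literal, so eval() accepts it (only do this with data you generated yourself):
var loose = "{ status: 'Success', id: 0 }"; // unquoted keys, single quotes - not valid JSON
var obj = eval('(' + loose + ')');          // but a perfectly valid JS object literal
console.log(obj.status); // "Success"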

Can I sandbox 3rd party JavaScript to protect the global namespace?

At work I have to deal with a lot of (sometimes awful) JavaScript that vendors expect us to drop into our websites. Some of them add arbitrary, un-namespaced, names to the global scope. I've been toying with different ideas about sandboxing these programs so they don't have direct access to the global scope, but I haven't been able to think of anything that might actually work. (I thought of evalling the fetched code, but I'm convinced that's a bad idea.)
What solutions are there to keep the global scope clean while still using arbitrary 3rd party JavaScript?
Edit: #kirilloid's comment tells me I haven't been clear about how this code is delivered. I'm not usually given code to put into our website. Most of the time, I'm just given a URL and asked to create a script tag that points to it.
Your options are pretty limited. If the code is presented to you in the form of a <script> tag you can't modify, you can stop trying now.
If you can modify the script, you can wrap the code in a closure; this will stop any variables they declare with var being published in the global scope (but you'll still get problems with implicit globals).
(function () {
    var aGlobalVariableThatUsedToCauseACollision = 4; // no longer collides!
    anotherGlobalVariableThatUsedToCauseACollision = 4; // Mmmmm.
}());
Probably an infeasible option, but you could use "use strict", which prevents implicit globals; if the code is as awful as it seems, though, this won't help you.
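For example, with strict mode the implicit global from the snippet above becomes a runtime error instead of silently leaking onto window:
(function () {
    "use strict";
    anotherGlobalVariableThatUsedToCauseACollision = 4; // ReferenceError in strict mode
}());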
An additional change (and perhaps the best) would be to wrap your own code in a closure and keep local copies of the important window properties (undefined, etc.) in there; this way you prevent other scripts from affecting yours (this is what libraries such as jQuery do).
(function (jQuery, $, undefined) {
    alert(undefined == 4); // false!
}(jQuery, $));

undefined = 4;

Node.js and client sharing the same scripts

One of the theoretical benefits of working with Node.js is the possibility of sharing the same scripts between the client and the server. That would make it possible to fall back to the same functionality on the server if the client does not support JavaScript.
However, the Node.js require() method works in its own way. In the script you load, you can add things to this or exports that will later be available on the object that required the script:
var stuff = require('stuff');
stuff.show();
In stuff.js:
this.show = function() {
    return 'here is my stuff';
}
So, when re-using this script on the client, the .show() method will be added to the window scope. That is not what we want here; instead, we would like to add it to a custom namespace.
My only solution so far is something like (in stuff.js):
var ns = typeof exports == 'undefined' ? (function() {
    return window['stuff'] = {};
})() : exports;

ns.show = function() {
    return 'here is my stuff';
}

delete ns; // remove ns from the global scope
This works quite well since I can call stuff.show() on the server and client. But it looks quirky. I tried searching for solutions but node.js is still very new (even to me) so there are few reliable resources. Does anyone have a better idea on how to solve this?
In short, if you want to re-use scripts, don't use Node.js-specific stuff; you have to go with the lowest common denominator here, the browser.
Solutions are:
Go overkill and use RequireJS; this will make it work in both Node.js and the browser. But you need to use the RequireJS format on the server side, and you also need to plug in an on-the-fly conversion script...
Do your own loader
Wrap your re-use scripts both on the server and client side with an anonymous function
Now create some code that uses call(module) on that function: on the Node side you pass in this as the module, on the client side you pass in a namespace object (see the sketch after this list)
Keep it simple and stupid, as it is now, and don't use this in the module scope on the Node.js side of things
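A minimal sketch of the wrapping idea from the list above, using the question's stuff namespace (this is just one way to do it):
// stuff.js - wrapped so the same file works in Node and the browser
(function () {
    this.show = function () {
        return 'here is my stuff';
    };
}).call(typeof exports !== 'undefined'
    ? exports                               // Node: `this` is module.exports
    : (window.stuff = window.stuff || {})); // browser: `this` is a namespace object
On the server you can then call require('./stuff').show(); in the browser, stuff.show() - and nothing extra leaks onto window.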
I wish I could give you a simple out of the box solution, but both environments differ to much in this case. If you really have huge amounts of code, you might consider a build script which generates the files.
