How would one go about sandboxing user JavaScript and exposing interfaces without allowing modification of those interfaces? Specifically in a Node.js environment. Example:
// public class you can interface with (should be immutable)
function InterfaceClass () {
    this.x = 0;
    this.y = 0;
}

// executing the user's code (in a sandbox of some sort)
function userCode () {
    // disallow this:
    InterfaceClass = function () {
    };

    // allow this:
    var interface = new InterfaceClass();
    interface.x = 1;
}
The only part of a sandbox that is straightforward to implement is protecting your own interfaces and custom JavaScript functions.
You can arrange things so that none of your own globals can be modified and the only values the user code receives from the outside world are copies.
To do this, put the user code inside a function of your creation (similar to how a Node module is loaded) and pass copies of your API to the user code as arguments to that wrapper function (typically a single object with the API as its properties). All the user code can then do is modify the copies, not the originals, so no other code is affected.
Using your example:
// interfaces created inside some private scope
(function () {
    // public class you can interface with (should be immutable)
    function InterfaceClass () {
        this.x = 0;
        this.y = 0;
    }

    var api = {Interface: InterfaceClass};
    launchUsercode(api);
})();
// user code is wrapped in your own function, creating a private scope
function launchUsercode(api) {
    // executing the user's code (in a sandbox of some sort)
    function userCode () {
        // allow this:
        var interface = new api.Interface();
        interface.x = 1;

        // mucking with api.Interface does not do anything other than
        // mess up their own environment
    }
    userCode();
}
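If you also want the handed-out copies themselves to be tamper-resistant (the "should be immutable" part of the question), one option (my own addition, not part of the answer above) is to freeze the API object and the constructor's prototype inside the private scope before calling launchUsercode:
// inside the IIFE above, before launchUsercode(api):
var api = {Interface: InterfaceClass};
Object.freeze(InterfaceClass.prototype); // no adding/overriding prototype methods
Object.freeze(api);                      // api.Interface can no longer be reassigned
launchUsercode(api);

// in user code, these now have no effect (and throw in strict mode):
// api.Interface = function () {};
// api.Interface.prototype.hack = 1;
Instances created with new api.Interface() are still ordinary objects, so interface.x = 1 keeps working.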
FYI, the only thing this protects against is redefinition of your own functions. The user code is still free to do anything any Node.js application could do: start up servers, read/write the file system, shut down the process, fire up child processes, etc. This is not even close to generally secure. That is a much, much harder problem, one that probably needs fully firewalled VMs with their own file systems, separate processes, and a lot of process management. That is not an easy task at all.
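As a very rough illustration of the "separate processes" direction (my own sketch, not something the answer above prescribes), you could at least run each user's code in its own Node process so a crash or process.exit() only takes down that process; note this does nothing by itself to restrict file system or network access:
// sketch only: run untrusted code in a separate, time-limited Node process
const { execFile } = require('child_process');

// 'user-code.js' is a hypothetical file containing the untrusted script
execFile('node', ['user-code.js'], { timeout: 5000 }, (err, stdout, stderr) => {
    if (err) {
        console.error('user code failed or timed out:', err.message);
        return;
    }
    console.log('user code output:', stdout);
});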
The question is related to general JS programming, but I'll use Nightwatch.js as an example to elaborate my query.
Nightwatch.js provides various chainable methods on its browser object, like:
browser
    .setValue('input[name="email"]', 'example@mail.com')
    .setValue('input[name="password"]', '123456')
    .click('#submitButton')
But if I'm writing a method to select an option from a dropdown, it requires multiple steps, and if there are multiple dropdowns in a form it gets really confusing, like:
browser
    .click(`#country`)
    .waitForElementVisible(`#india`)
    .click(`#india`)
    .click(`#state`)
    .waitForElementVisible(`#delhi`)
    .click(`#delhi`)
Is it possible to create a custom chaining method to group these already defined methods? For example something like:
/* custom method */
const dropdownSelector = function (id, value) {
    return this
        .click(`#${id}`)
        .waitForElementVisible(`#${value}`)
        .click(`#${value}`)
}
/* So it can be used as a chaining method */
browser
    .dropdownSelector('country', 'india')
    .dropdownSelector('state', 'delhi')
Or is there any other way I can solve my problem of increasing reusability and readability of my code?
I'm somewhat new to JS, so I couldn't tell you an ideal code solution, and I'll admit I don't know what a proxy is in this context. But in the world of Nightwatch and test automation I'd normally wrap multiple steps I plan on reusing into a page object: create a new file in a pageObject folder and fill it with the method you want to reuse.
So your test...
browser
    .click(`#country`)
    .waitForElementVisible(`#india`)
    .click(`#india`)
    .click(`#state`)
    .waitForElementVisible(`#delhi`)
    .click(`#delhi`)
becomes a page object method in another file called 'myObject' like...
module.exports = {
    selectLocation(browser, country, state, city) {
        browser
            .click(`#country`) // <== assume this never changes?
            .waitForElementVisible(country)
            .click(country)
            .click(state)
            .waitForElementVisible(city)
            .click(city);
    }
};
and then each of your tests imports the method and defines those values itself, however you choose to manage that...
const myObject = require('<path to the new pageObject file>');

module.exports = {
    'someTest': function (browser) {
        const country = 'something';
        const state = 'something';
        const city = 'something';
        myObject.selectLocation(browser, country, state, city);
    }
};
You can also set your country / state / city as variables in a globals file and keep them the same for everything, but I don't know how granular you want to be.
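For example, a minimal sketch (assuming your Nightwatch config points globals_path at this file; the selector values here are made up):
// globals.js
module.exports = {
    countryDropdown: '#country',
    countryOption: '#india',
    stateDropdown: '#state',
    stateOption: '#delhi'
};

// inside a test, the values are then available via browser.globals:
// browser.click(browser.globals.countryDropdown);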
Hope that made some sense :)
This is a great place to use a Proxy. Given some class:
function Apple ()
{
    this.eat = function ()
    {
        console.log("I was eaten!");
        return this;
    }

    this.nomnom = function ()
    {
        console.log("Nom nom!");
        return this;
    }
}
And a set of "extension methods":
const appleExtensions =
{
    eatAndNomnom ()
    {
        this.eat().nomnom();
        return this;
    }
}
We can create a function which returns a Proxy to select which properties are retrieved from the extension object and which are retrieved from the originating object:
function makeExtendedTarget(target, extensions)
{
    return new Proxy(target,
    {
        get (obj, prop)
        {
            if (prop in extensions)
            {
                return extensions[prop];
            }

            return obj[prop];
        }
    });
}
And we can use it like so:
let apple = makeExtendedTarget(new Apple(), appleExtensions);

apple
    .eatAndNomnom()
    .eat();

// => "I was eaten!"
//    "Nom nom!"
//    "I was eaten!"
Of course, this requires you to call makeExtendedTarget whenever you want to create a new Apple. However, I would consider this a plus, as it makes it abundantly clear you are creating an extended object, and signals that you can expect to call methods not normally available on the class API.
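If the extra call bothers you, a small factory of your own (my addition, not part of the original suggestion) keeps the call sites tidy:
// hypothetical helper so callers never deal with makeExtendedTarget directly
function createApple ()
{
    return makeExtendedTarget(new Apple(), appleExtensions);
}

let apple2 = createApple();
apple2.eatAndNomnom();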
Of course, whether or not you should be doing this is an entirely different discussion!
I'm relatively new to JavaScript so apologies if this type of question is an obvious one.
We have an app which uses etcd as its data store. What I'm trying to do is implement a way of swapping or alternating between different backend data stores (I want to use DynamoDB).
I come from a C# background, so if I were to implement this behaviour in an ASP.NET app I would use interfaces and dependency injection.
The best solution I can think of is a factory which returns a data store object based on some configuration setting. I know that TypeScript has interfaces, but I would prefer to stick to vanilla JS if possible.
Any help would be appreciated. Thanks.
Interfaces are "merely" a static typing measure used to implement polymorphism. Since JavaScript doesn't have a static type system, it also doesn't have interfaces. But it's a highly polymorphic language in itself, so what you want to do is trivial; simply don't write any interfaces as part of the process:
function StorageBackend1() { }
StorageBackend1.prototype.store = function (data) {
    // here be dragons
};

function StorageBackend2() { }
StorageBackend2.prototype.store = function (data) {
    // here be other dragons
};

function SomeModel(storage) {
    this.storage = storage;
    this.data = {};
}
SomeModel.prototype.saveData = function () {
    this.storage.store(this.data);
};

var m1 = new SomeModel(new StorageBackend1()),
    m2 = new SomeModel(new StorageBackend2());

m1.saveData();
m2.saveData();
Using TypeScript and actual interfaces gives you the sanity of a statically type checked language with fewer possible surprises at runtime, but you don't need it for polymorphism.
I come from Delphi / C# etc., and interfaces are just a pain in the butt.
JavaScript is so much nicer.
With JavaScript, interfaces are not needed; just add the method.
e.g.
function MyBackend1() {
    this.ver = 'myBackEnd1';
}

function MyBackend2() {
    this.ver = 'myBackEnd2';
    this.somefunc = function () { console.log('something'); };
}

function run(backend) {
    console.log(backend.ver);
    // below is the kind of check an interface would normally guarantee for you
    if (backend.somefunc) backend.somefunc();
}

run(new MyBackend2());

// let's now use backend1
run(new MyBackend1());
If I want to span my JavaScript project across multiple source files, but have each file have access to the same private variable, how would one do that?
For example, if I have the following code:
APP = (function () {
    var _secret = {},
        app = {};

    // Application part 01:
    app.part01 = (function () { /* function that uses _secret */ }());

    // Application part 02:
    app.part02 = (function () { /* function that uses _secret */ }());

    return app;
}());
How do I put app.part01 and app.part02 in separate files, but still have access to _secret?
I don't want to pass it as an argument. That's just giving the secret away, as app.part01() could be replaced by any other function.
Maybe I am asking the impossible, but your suggestions might lead me in the right way.
I want to work with multiple files, but I don't know how. Copying and pasting everything inside a single function each time before testing is not something I want to do.
How do I put app.part01 and app.part02 in separate files, but still have access to _secret?
That's impossible indeed. Script files are executed in the global scope, and don't have any special privileges. All variables that they will be able to access are just as accessible to all other scripts.
Copying and pasting everything inside a single function each time before testing is not something I want to do
What you are looking for is an automated build script. You will be able to configure it so that it bundles your files together and wraps them in an IIFE, in whose scope they will be able to share their private state. The simplest example:
#!/bin/sh
echo "APP = (function () {
var _secret = {},
app = {};" > app.js
cat app.part01.js >> app.js
cat app.part02.js >> app.js
echo " return app;
}());" >> app.js
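With that setup, each part file is written as if it already lived inside the wrapper. For illustration (my own sketch of what app.part01.js might contain), _secret and app are simply assumed to be in scope:
// app.part01.js - concatenated into the IIFE by the build script above
app.part01 = (function () {
    return function (key, value) {
        _secret[key] = value; // shares the private object with the other parts
    };
}());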
The only way that you can share _secret is by attaching it to the application object and then the application object to the window object. Here is an example.
// FIRST JS FILE...
var application; // will be attached to window

(function (app) {
    app.secret = "blah!"; // will be attached to application
})(application || (application = {}));

// ANOTHER JS FILE
var application;

(function (app) {
    app.method1 = function () { console.log(app.secret); }; // will be attached to application
})(application || (application = {}));

application.method1(); // will display 'blah!' on the console
Working example on jsbin
One way I was able to accomplish this was to create a JS file that contained the global object.
// Define a global object to contain all environment and security variables
var envGlobalObj = {
    appDatabase: process.env.YCAPPDATABASEURL,
    sessionDatabase: process.env.YCSESSIONDATABASEURL,
    secretPhrase: process.env.YCSECRETPHRASE,
    appEmailAddress: process.env.YCAPPEMAILADDRESS,
    appEmailPassword: process.env.YCAPPEMAILPASSWORD
}

module.exports = envGlobalObj
Then in the files I wish to reference this object, I added a require statement.
var envGlobalObj = require("./envGlobalObj.js");
This allowed me to centralize the environment and secret variables.
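For instance (my own illustration with made-up file and property usage), a mail module could pull just the values it needs from that shared object:
// mailer.js - hypothetical consumer of the shared config object
var envGlobalObj = require("./envGlobalObj.js");

function createTransportOptions() {
    return {
        user: envGlobalObj.appEmailAddress,
        pass: envGlobalObj.appEmailPassword
    };
}

module.exports = createTransportOptions;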
I'm creating a plugin system using the following:
function Plugin(thingy, code)
{
    var GLOBAL = null;
    var arguments = null;
    var process = null;
    var require = null;

    eval(code);
}
plugins.push(new Plugin(thingy, code));
Please don't get too excited about the eval(); using ('vm') or a sandbox is not an option, as this will be a long-running object until the user unloads it. It will also be running in its own Node.js instance, so they can't affect other users. I'd still have the same problem passing this object reference into a sandbox system anyway.
What I am concerned about is someone seeing the code of the thingy object, which has functions they need to use, e.g. shoot():
console.log(thingy.shoot.toString());
A way around this was the following:
function thingy()
{
    // They can't see this
    var _shoot = function (someone)
    {
        // Load weapon
        // Aim
        // Fire
    };

    // They can see this
    this.shoot = function (someone)
    {
        _shoot(someone);
    };
}
This way if they console.log(thingy.shoot.toString()) they'll only see _shoot(someone); and not the actual code that handles the shooting process.
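A quick check of that behaviour (assuming the plugin code receives an instance of thingy):
var t = new thingy();

// prints only the wrapper's source, not the body of _shoot
console.log(t.shoot.toString());
// => function (someone)
//    {
//        _shoot(someone);
//    }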
Please could someone help me with the following:
Is there an easier way to limit access to a passed-in variable's code?
I'm setting GLOBAL, arguments, process and require to null; are there others I need to worry about?
Probably many of you have tried to achieve encapsulation in JavaScript. The two methods known to me are:
a bit more common I guess:
function myClass() {
    var prv; // and all private stuff here
    // and we don't use prototype, everything is created inside the scope
    return {publicFunc: sth};
}
and the second one:
function myClass2() {
    var prv = { /* private stuff here */ };
    Object.defineProperty(this, 'prv', {value: prv});
    return {publicFunc: this.someFunc.bind(this)};
}
myClass2.prototype = {
    get prv() { throw 'class must be created using new keyword'; },
    someFunc: function () {
        console.log(this.prv);
    }
};
Object.freeze(myClass2);
Object.freeze(myClass2.prototype);
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case? I know it allows external code to access the arguments of someFunc via
myClass2.prototype.someFunc.arguments
but only in the case of sloppily executed callbacks (run synchronously inside the caller chain). Calling them with setTimeout(cb, 0) breaks the chain and prevents access to the arguments, as does simply returning a value synchronously. At least as far as I know.
Did I miss anything? It's kind of important as the code will be used with external, untrusted, user-provided code.
I like to wrap my prototypes in a module which returns the object, this way you can use the module's scope for any private variables, protecting consumers of your object from accidentally messing with your private properties.
var MyObject = (function (dependency) {

    // private (static) variables
    var priv1, priv2;

    // constructor
    var module = function () {
        // ...
    };

    // public interfaces
    module.prototype.publicInterface1 = function () {
    };

    module.prototype.publicInterface2 = function () {
    };

    // return the object definition
    return module;
})(dependency);
Then in some other file you can use it like normal:
var obj = new MyObject();
Any more 'protecting' of your object is a little overkill for JavaScript imo. If someone wants to extend your object then they probably know what they're doing and you should let them!
As redbmk points out, if you need private instance variables you could use a map with some unique identifier of the object as the key.
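One way to realize that idea (my own sketch; a WeakMap keyed by the instance itself plays the role of that map and lets entries be garbage-collected together with the instance):
var Counter = (function () {
    // private per-instance state lives here, keyed by the instance
    var privates = new WeakMap();

    var module = function () {
        privates.set(this, {count: 0});
    };

    module.prototype.increment = function () {
        privates.get(this).count += 1;
        return this;
    };

    module.prototype.value = function () {
        return privates.get(this).count;
    };

    return module;
})();

var counter = new Counter();
counter.increment().increment();
console.log(counter.value()); // 2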
So, as the second option is WAY more convenient to me (specifically in my case, as it visually separates construction from workflow), the question is: are there any serious disadvantages / leaks in this case?
Hm, it doesn't really use the prototype. There's no reason to "encapsulate" anything here, as the prototype methods will only be able to use public properties - just like your untrusted code can access them. A simple
function myClass2() {
    var prv = { /* private stuff here */ };
    Object.defineProperty(this, 'prv', {value: prv});
    // optionally bind the publicFunc if you need to
}
myClass2.prototype.publicFunc = function () {
    console.log(this.prv);
};
should suffice. Or you use the factory pattern, without any prototypes:
function myClass2() {
    var prv = { /* private stuff here */ };
    return {
        prv: prv,
        publicFunc: function () {
            console.log(this.prv); // or even just `prv`?
        }
    };
}
I know it allows external code to access the arguments of someFunc via
myClass2.prototype.someFunc.arguments
Simply use strict mode; this "feature" is disallowed there.
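A quick illustration (my own; in strict mode, reading a function's arguments property throws instead of exposing the last call's arguments):
'use strict';

function someFunc(secretArg) {
    return secretArg;
}

someFunc('hidden');

try {
    console.log(someFunc.arguments); // TypeError in strict mode
} catch (e) {
    console.log(e instanceof TypeError); // true
}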
It's kind of important as the code will be used with external, untrusted, user-provided code.
They will always get your secrets if the code is running in the same environment. Always. You might want to try WebWorkers instead, but notice that they're still CORS-privileged.
On enforcing encapsulation in a language that doesn't properly support private, protected and public class members, I say "Meh."
I like the cleanliness of the Foo.prototype = { ... }; syntax. Making methods public also allows you to unit test all the methods in your "class". On top of that, I just simply don't trust JavaScript from a security standpoint. Always have security measures on the server protecting your system.
Go for "ease of programming and testing" and "cleanliness of code": make it easy to write and maintain, and whichever option you feel is easier to write and maintain is the answer.