I have a problem where I need to see whether a particular piece of JavaScript source code takes a lot of heap space. Ideally I would like to have access to heap memory usage and the data types of objects on the heap. The trouble is that it seems I'll have to execute the code to get access to heap allocation information.
The code, however, is malicious (heap spray attacks), so I would like to avoid full execution. Is there a way for me to simulate the execution instead? I've read that I can use sbrk or API hooking (MSFT Detours) to get memory usage for a particular process (usually the JS interpreter/engine), but those approaches actually execute the code.
EDIT:
I would need to access heap memory as part of a pipeline for multiple JS files, so it would be ideal to get the memory info via a command or through an API.
If you use Chrome you can use the Performance tab of Developer Tools. Just press record, then refresh the page or run your JS script.
If you want to see JS memory you can also use Chrome's Task Manager (More Tools -> Task Manager).
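If you need a number programmatically rather than a screenshot, Chrome also exposes the non-standard performance.memory object (not part of any specification, Chrome-only, and the values are coarse unless Chrome is started with --enable-precise-memory-info); a minimal sketch:

// Non-standard, Chrome-only; values are quantized by default.
if (performance.memory) {
  var m = performance.memory;
  console.log('used JS heap:  ' + (m.usedJSHeapSize / 1048576).toFixed(1) + ' MiB');
  console.log('total JS heap: ' + (m.totalJSHeapSize / 1048576).toFixed(1) + ' MiB');
  console.log('heap limit:    ' + (m.jsHeapSizeLimit / 1048576).toFixed(1) + ' MiB');
}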
What does it mean to "simulate execution"?
Generally speaking: JavaScript engines are made to execute JavaScript. For real.
For analyzing malicious code, you'll probably want to look into sandboxing/isolating it as much as possible. In theory, executing it normally in a browser should be enough -- in practice though, security bugs do sometimes exist in browsers, and malicious code will attempt to exploit those, so for this particular purpose that probably won't be enough.
One approach is to add a whole other layer of sandboxing. Find yourself a JavaScript-on-JavaScript interpreter, or pick a non-JIT-compiling JavaScript engine, compile it to WebAssembly, and run that from your app. You can then inspect the memory of the WebAssembly instance running the malicious code; this memory is exposed as an ArrayBuffer to your JavaScript app. (I don't have a particular recommendation for such a JS engine, but I'm sure they exist.) It might be a bit of effort to get such a setup going (not sure; haven't tried), but it would give you perfect isolation from the evil code.
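I haven't built this myself, but inspecting the instance's memory could look roughly like the sketch below; engine.wasm is a hypothetical JS engine compiled to WebAssembly, and the assumption that it exports its linear memory as memory and a run() entry point is mine:

(async () => {
  // Hypothetical: a non-JIT JS engine compiled to WebAssembly.
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('engine.wasm'),
    {} // whatever imports the engine actually needs
  );

  const memory = instance.exports.memory; // a WebAssembly.Memory object
  const before = memory.buffer.byteLength;

  // Feed the suspicious script to the sandboxed engine (hypothetical export).
  instance.exports.run(/* pointer/length of the script, engine-specific */);

  // The engine's whole heap is just bytes in this ArrayBuffer, so you can
  // measure growth or scan it without the malicious code ever touching
  // your real JS heap.
  console.log('heap grew by', memory.buffer.byteLength - before, 'bytes');
})();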
Is there some way to prevent certain 'functions' in JavaScript from running on the client? I have concerns that something like Mimikatz could be run from memory and enable a hacker to compromise a host. Ideally this would detect definable code that is not allowed to run and prevent its execution.
There are a few problems with what you're describing. The first is that it's very easy to obfuscate JavaScript. For example, let's say you didn't want the eval function to be executed. It's easy enough to design a regular expression that would remove direct calls to eval. Except what if it's not named eval anymore?
var e = eval;
e('evil');
You could detect that too, but you end up going down a rabbit hole trying to anticipate obfuscation techniques. If you really want to do this, I suggest starting with a browser plugin like Greasemonkey or Tampermonkey. It's going to be a lot of work and you'll get a lot of false positives.
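For illustration, a few equivalent ways the same call can be disguised, each of which slips past a naive regex for the literal token eval(:

window['ev' + 'al']('evil');   // property name assembled at runtime
(0, eval)('evil');             // indirect eval
Function('evil')();            // not eval at all, but just as powerful
setTimeout('evil', 0);         // string arguments are evaluated too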
Another issue is deciding what to block. Depending on the implementation, there are only a dozen or so global functions in JavaScript, plus another few dozen global objects. Which will be blocked? None of them are inherently dangerous unless there's a vulnerability in the JavaScript engine or browser. If you're aware of a vulnerability, patching is going to be far safer and easier than filtering JavaScript.
The most common way that JavaScript is used in attacks is through cross-site scripting (XSS). The two main approaches are using a malicious script to steal data from the domain, or reformatting the page to prompt the user for sensitive data (usually credentials or payment card numbers). Both techniques use the same JavaScript functions that legitimate pages do, so it's effectively impossible to prevent by analyzing the JavaScript. It is possible to block simple XSS attacks by looking for JavaScript in a request parameter, but browsers already do that.
Your specific example of Mimikatz isn't JavaScript-specific. Mimikatz is a generic tool for post-exploitation. In other words, the attacker must first find a way into your system, then use Mimikatz to make it easier to stay in and perform mischief. Again, without an initial vulnerability in the JavaScript engine or browser, an attacker won't be able to run something like Mimikatz using JS.
If you're still worried, look at a plugin like NoScript. It supports policies defining which domains can run JavaScript. It's not as granular as you'd like, but it's easy to set up.
I have some questions regarding stored JavaScript procedures after reading the blog entry from PointBeing.
Is there an advantage to storing my code in the DB? I mean functions like lookups for documents, not adding numbers like the example from PointBeing.
Is MongoDB stored javascript faster than node.js javascript?
Are MongoDB stored javascript queries cached, and are they any faster?
I'm interested in MongoDB stored javascript performance compared to Node.js Javascript.
Evaluating functions stored in db.system.js ("stored procedures", if you would like to call them that) is deprecated. The articles on the db.eval shell function and the eval database command carry a "Deprecated since version 3.0" warning, and the article on server-sided javascript doesn't mention it anymore. So you should avoid using it. One reason is that you cannot run a javascript function when you use sharding, so when you build an application which requires eval, you prevent it from scaling in the future. Another is that javascript functions undermine the permission concept: they always need to be run as admin, which makes it impossible to establish a sane permission system. This is especially problematic from a security standpoint, considering that server-sided scripts which use user-provided data can potentially be vulnerable to arbitrary script injection.
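For context, this is roughly what the construct in question looks like in the shell (the function and collection names here are made up):

// Store a function server-side in db.system.js (the mechanism that is deprecated):
db.system.js.save({
  _id: "findActiveUsers",
  value: function (limit) {
    return db.users.find({ active: true }).limit(limit).toArray();
  }
});

// Load the stored functions into the current shell session and call one:
db.loadServerScripts();
findActiveUsers(10);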
The advantage of server-sided javascript is that it runs on the database server. This reduces latency between application server and database server when you need to perform a large number of queries. But you can get the same advantage by opening a mongo shell on the database server and executing it there.
The latency advantage is only relevant when you perform multiple queries from your script. When you have only one query, you will still have the latency when invoking the script. So you gain nothing except unnecessary complexity.
There is no additional caching or other optimization for server-sided javascript. Even worse: it will get reparsed and reinterpreted every time you run it. So it might even be slower than javascript in your application server.
Further, many complex queries which would require script support if implemented with find() alone can often be expressed with the aggregation framework, which will in most cases be far faster than doing the same with find() and javascript, because the aggregation framework is implemented in C++ and has access to the raw BSON documents.
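For example, a per-customer total that the blog-post style would compute by looping over find() inside a stored function can be a single pipeline instead (the collection and field names are illustrative):

// Runs entirely inside the server's C++ aggregation engine, no JavaScript involved:
db.orders.aggregate([
  { $match: { status: "shipped" } },
  { $group: { _id: "$customerId", total: { $sum: "$amount" } } },
  { $sort: { total: -1 } }
]);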
The hilarious thing is that the blog post ( http://pointbeing.net/weblog/2010/08/getting-started-with-stored-procedures-in-mongodb.html ) was written back when MongoDB's JavaScript execution still took a single-threaded global lock.
That means there were no concurrency features or more granular locks associated with it (the lock is still a problem, and concurrency is only achieved through multiple isolates). Just because you see it in some random blog post does not mean it should be used.
To answer your questions directly:
Nope. In fact the disadvantage is that the calling user needs full admin rights. This means you give every single privilege to your web user, since the built-in JS engine has hooks for everything, including administration functions; as such, it requires admin rights in order to run.
Calling JS from JS, through another JS interpreter, down into C++? No.
No, MongoDB caching does not work like that. I recommend you read the fundamentals documentation: http://docs.mongodb.org/manual/faq/fundamentals/
I started to learn JavaScript recently. I've been working in the creation of applications with Node.js and Angular for a few months now.
One of the main aspects that was puzzling me was how it is possible to write asynchronous code in JavaScript in which I do not have to worry about things like thread synchronization, race conditions, etc.
So, I found a couple of interesting articles ([1], [2]) that explained how I can be guaranteed that any piece of code I write will always be executed by a single thread at a time. Bottom line, all my asynchronous code is simply scheduled to be executed at some point within an event loop. This sounds pretty much like how the OS scheduler works on a machine with a single processor, where every process is scheduled to use the processor for a limited amount of time, giving us a false sense of parallelism. And the callbacks would be like interrupts.
The articles do not provide any particular references, so I thought that the best source on how the JavaScript execution engine works should certainly be the language specification, and so I got myself the latest copy of ECMAScript 5.1.
To my great surprise I discovered that this execution behavior is not specified there. How come? This looks like a fundamental design choice made in all JavaScript execution engines, in browsers and in Node. Interestingly, I have not been able to find a place where this is specified for any specific engine. In fact, I have no clue how people find out that this is the way things work, to the point that it is so categorically affirmed in books and blogs like the ones cited above.
So, I have a set of what I consider interesting questions. I would appreciate any answers providing insights, remarks or simply references pointing me in the right direction to understand the following:
Since ECMAScript does not specify that the JavaScript execution engine should work with an event loop, how come many implementations of JavaScript seem to work this way, not only in browsers but also in Node.js?
Does that mean I could implement a new JavaScript engine which is ECMAScript-compatible and that in fact provides true multithreading capabilities with features like synchronization locks, conditions, etc.?
Does this execution model using an event loop preclude me from taking advantage of multiple cores if I want to execute an intensive CPU-bound task? I mean, I can surely divide the task into chunks (as explained in one of the articles), but these are still executed serially, not in parallel. So, how could a JavaScript engine take advantage of multiple cores to run my code?
Do you know of any other reputable sources where this behavior for any particular JavaScript engine implementation is formally specified?
How could the code be portable between libraries and engines if we cannot assume a few things about the execution environments?
It looks like too many questions, perhaps making this post too broad to be answered. If it gets closed I will try to ask them in different threads. But they all revolve around the fact that I want to understand better why JavaScript and Node were designed with an event loop, and whether this is specified somewhere (besides the browsers' source code) that I could read to gain a deeper understanding of the designs and decisions taken here and, more importantly, to know exactly what the source of information is for people writing books and posts about it.
There are certain assumptions/weak references you make which lead you to this conclusion. Some of them are:
ECMAScript ECMA-XXX vs JavaScript vs JavaScriptEngine:
ECMAScript is a language specification published by ECMA International. JavaScript is the most widely used web language that conforms to ECMAScript; for the most part the two terms are used synonymously (remember there is also ActionScript). A JavaScript engine is the implementation (interpreter) of the language: a program in flesh and bones built from the ground up, unlike ECMAScript, which only describes the language's end goals and behaviour, and unlike JavaScript code, which merely uses the standard. You will find that an engine does more than just conform to the ECMAScript standard; the two sit at opposite ends of the specification/implementation spectrum. An example of this chain is ECMA-262 / JavaScript / V8.
Event loop in browser vs Event loop in node.JS (JSEngine vs JSEnvironment):
This looks like a fundamental design choice done in all JavaScript execution engines in browsers and in node.
If you are using Node.js you may have used the core libraries fs/net/http. These use event emitters which are hooked into the event loop provided by libuv. This is an extension to the JavaScript engine V8, and together they form the Node.js platform. The event loop here involves objects like threads, sockets, files or abstract requests. But the event loop did not originate here; it was first used in browsers. A browser implements a DOM, which requires events for working with HTML elements. See the DOM specification and the one implemented by Mozilla. They use events and require an event loop built on top of the JS engine for browser use. Chrome adds the DOM interface to the V8 engine it embeds.
Yes, you will feel this is common, because of the DOM API that every browser needs. Node's developers brought this evented processing model to the server with the help of libuv, which provides a non-blocking, asynchronous abstraction for the low-level operations a server requires. As already pointed out, not all server frameworks use an event loop. Take Rhino, for example, which literally uses Java classes for files, sockets, everything; if you use core Java IO, file operations are synchronous.
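By contrast, a small sketch of Node's non-blocking flavour, using only the standard fs module:

var fs = require('fs');

// Non-blocking: the read is handed off to libuv and the callback is queued
// back onto the event loop when the data is ready.
fs.readFile('/etc/hosts', 'utf8', function (err, data) {
  if (err) throw err;
  console.log('read ' + data.length + ' characters');
});

// This line runs before the callback above fires.
console.log('read scheduled; the event loop keeps going');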
Now answering your questions in order:
explained in point 2 above
Yes, you can. Take a look at Rhino; there are many others. It may be possible in Node, but Node is geared to be a high-performance web server and that might be against its zen.
Like I said, the event loop sits on top of the JS engine. It is a design pattern that works best for IO; a multi-threaded design works better for heavy CPU loads. If you want to use multiple cores in Node.js, take a look at the cluster module. For browsers you have Web Workers.
That varies from engine to engine, and with how it is embedded. Browsers have the DOM and therefore an event loop; servers can vary. Check their specifications.
For browsers it is possible to make code portable between them to a good extent. No promises on the server.
The event loop doesn't have anything to do with JavaScript itself; it's part of the environment, not of the JS engine. Since JavaScript was designed primarily to manipulate a user interface, it has been used heavily with an event loop. But the event loop is part of the UI implementation, not just in JavaScript but in any language.
Yes, you can. But it will not be just an engine, more like an environment/platform. I think (but am not quite sure) that you can use threads and related features in Rhino.
Yes, it does. In Node this is usually solved by spawning more processes, and in the browser you can use Web Workers.
I can't imagine a better source than the specification. If something isn't there, it's just not part of JavaScript (a.k.a. ECMAScript).
I have spent a good amount of time today trying to find the answers to my own questions, guided by some of the comments and other answers left for me here. I share my findings here in case others may consider them useful.
Event-Driven Design in JavaScript for Browsers
The decision to design JavaScript this way seems mostly related to the requirements of the DOM Event Architecture. In this specification we can find explicit requirements related to event order and the event loop. The HTML5 specification goes even further: it defines the terms explicitly and states specific requirements for the event loop implementation.
This must certainly have driven the design of the JavaScript execution engines in browsers. In the article Timing and Synchronization in JavaScript published by Opera we can clearly see that these requirements are the driving force behind the design of the Opera browser. Also, in another article from Mozilla, named Concurrency Model and Event Loop, we can find a clear explanation of the same event-driven design concepts as implemented by Mozilla (although the document seems outdated).
The use of an event loop to deal with this kind of applications is not new.
Handling user input is the most complex aspect of interactive programming. An application may be sensitive to multiple input devices, such as mouse and keyboard, and may multiplex these among multiple input devices (e.g. different windows). Managing this many-to-many mapping is usually in the province of User Interface Management Systems (UIMS) toolkits. Since most UIMS are implemented in sequential languages they must resort to various techniques to emulate the necessary concurrency. Typically these toolkits use an event-loop that monitors the stream of input events and maps the events to call-back functions (or event handlers) provided by the application programmer.
- John H. Reppy, Concurrent Programming in ML
The use of event loops is present in other famous UI toolkits like Java Swing and WinForms. In Java, all UI work must be done on the Event Dispatch Thread, whereas in WinForms all UI work must be done on the thread that created the Window object. So, even though these languages support true multithreading, they still require all UI code to run in a single thread of execution.
Douglas Crockford explains the history of the event loop in JavaScript in this great video called Loopage (worth watching).
Event-Driven Design in JavaScript for Node
Now, the decision to use an event-driven design for Node.js is a bit less evident. Crockford gives a good explanation in the video shared above. But also, in the book The Past, Present and Future of JavaScript, its author Axel Rauschmayer says:
2009—Node.js, JavaScript on the server. Node.js lets you implement servers that perform well under load. To do so, it uses event-driven non-blocking I/O and JavaScript (via V8). Node.js creator Ryan Dahl mentions the following reasons for choosing JavaScript:
“Because it’s bare and does not come with I/O APIs.” [Node.js can thus introduce its own non-blocking APIs.]
“Web developers use it already.” [JavaScript is a widely known language, especially in a web context.]
“DOM API is event-based. Everyone is already used to running without threads and on an event loop.” [Web developers are not scared of callbacks.]
So, it looks like Ryan Dahl, creator of Node.js, took the existing design of JavaScript in browsers into account when deciding how to implement his non-blocking, event-driven solution for Node.js.
The current implementation of Node.js uses a library called libuv, designed for building this kind of application. This library is a core part of the design of Node; we can find the definition of event loops in its documentation, and evidently it plays an important role in the current implementation of Node.js.
About Other EcmaScript Compatible Engines
The ECMAScript specification does not provide requirements about how concurrency needs to be handled in JavaScript. Therefore, this is decided by the implementation of the language. Other models of concurrency could easily be used without making the implementation incompatible with the standard.
The best two examples I found were the Nashorn JavaScript engine created by Oracle for JDK 8, and the Rhino JavaScript engine created by Mozilla. Both are ECMAScript compatible, and both allow the use of Java classes. Nothing in these engines requires the use of event-driven programming to deal with concurrency. These engines have access to the Java class library, and since they run on top of the JVM they probably have access to the other concurrency models offered on this platform.
Consider the following example, taken from JavaScript: The Definitive Guide, which illustrates the built-in functions of the Rhino shell.
print(x); // Global print function prints to the console
version(170); // Tell Rhino we want JS 1.7 language features
load(filename,...); // Load and execute one or more files of JavaScript code
readFile(file); // Read a text file and return its contents as a string
readUrl(url); // Read the textual contents of a URL and return as a string
spawn(f); // Run f() or load and execute file f in a new thread
runCommand(cmd, [args...]); // Run a system command with zero or more command-line args
quit() // Make Rhino exit
You can see that a new thread can be spawned to run a function or a JavaScript file in an independent thread of execution.
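A tiny illustration of that, using the Rhino shell's spawn() (this is a shell built-in plus Java interop, not standard JavaScript):

// Rhino shell: spawn() runs the function on a new Java thread.
spawn(function () {
  print("worker thread: " + java.lang.Thread.currentThread().getName());
});
print("main thread: " + java.lang.Thread.currentThread().getName());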
About Event-Driven Design, Multicores and True Concurrency
The best explanation I found on this subject comes from the book JavaScript The Definitive Guide. In this book, David Flanagan explains:
One of the fundamental features of client-side JavaScript is that it is single-threaded: a browser will never run two event handlers at the same time, and it will never trigger a timer while an event handler is running, for example. Concurrent updates to application state or to the document are simply not possible, and client-side programmers do not need to think about, or even understand, concurrent programming. A corollary is that client-side JavaScript functions must not run too long: otherwise they will tie up the event loop and the web browser will become unresponsive to user input. This is the reason that Ajax APIs are always asynchronous and the reason that client-side JavaScript cannot have a simple, synchronous load() or require() function for loading JavaScript libraries.
The Web Workers specification very carefully relaxes the single-threaded requirement for client-side JavaScript. The “workers” it defines are effectively parallel threads of execution. Web workers live in a self-contained execution environment, however, with no access to the Window or Document object and can communicate with the main thread only through asynchronous message passing. This means that concurrent modifications of the DOM are still not possible, but it also means that there is now a way to use synchronous APIs and write long-running functions that do not stall the event loop and hang the browser. Creating a new worker is not a heavyweight operation like opening a new browser window, but workers are not flyweight threads either, and it does not make sense to create new workers to perform trivial operations. Complex web applications may find it useful to create tens of workers, but it is unlikely that an application with hundreds or thousands of workers would be practical.
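A minimal sketch of that message-passing model (the file name and workload are illustrative):

// main.js — spawn a worker and communicate only through messages
var worker = new Worker('sum-worker.js');
worker.onmessage = function (event) {
  console.log('result from worker:', event.data);
};
worker.postMessage({ upTo: 10000000 });

// sum-worker.js — runs on its own thread, with no access to the DOM
self.onmessage = function (event) {
  var sum = 0;
  for (var i = 0; i < event.data.upTo; i++) sum += i;
  self.postMessage(sum); // the long loop never blocks the main thread's event loop
};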
What About Node.js True Parallelism?
Node.js is a fast-evolving technology, and perhaps that's why it is difficult to find opinions that are up to date. But basically, since it follows the same event-driven model as the browsers do, it is impossible to simply write a piece of code and expect it to take advantage of the multiple cores in the server. Since Node.js is implemented with non-blocking technologies, we can assume that every time we do some form of I/O (e.g. read a file, send something through a socket, write to a database), under the hood the Node engine may be using multiple threads and perhaps taking advantage of the cores, but our own code is still run serially.
These days, it looks like node.js clustering is the solution for this problem. There are also some libraries like Node Worker that seem to implement the Web Worker concept in node. These libraries basically let us spawn new independent processes within node.js. (Although I have not experimented with this yet).
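A minimal sketch of the clustering approach with Node's built-in cluster module (the HTTP handler is just a placeholder workload):

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // Fork one worker process per CPU core; each gets its own event loop.
  os.cpus().forEach(function () { cluster.fork(); });
} else {
  // Workers share the listening socket; connections are distributed among them.
  http.createServer(function (req, res) {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(8000);
}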
What About Portability?
It looks like, in terms of concurrency models, there is no way to guarantee that all these libraries will play nicely in all environments.
In the realm of browsers they all seem to work similarly, and since Node.js also runs an event loop, many things may still work, but there are no guarantees that this will work in other engines. I guess this is probably one of the disadvantages of ECMAScript compared to more extensive specifications like those defining the Java Virtual Machine or the CLR.
Perhaps something gets standardized later. More concurrency ideas for the future of ECMAScript are being discussed today; see the ECMAScript wiki: Strawman Proposals, Communicating Event-Loop Concurrency and Distribution.
I have a calculator widget (jsfiddle) that uses javascript's eval() function to evaluate the user's input to work as a calculator. It's an embedded widget in a chrome extension, so it doesn't have any database or anything else attached that could be hurt, and it doesn't send or receive any data.
Obviously, since it uses javascript's eval function, any javascript can be executed by this box. Is there any risk involved with this? I'm fairly new to javascript so I'm not sure what could result from the user being able to evaluate their own javascript inside this widget. Wouldn't anything they do just be reverted upon refresh?
JavaScript runs on the client side, so your server is not in any imminent danger.
But this could be a problem if users could somehow save their inputs and give a link to other users, as this would allow the execution of arbitrary JavaScript (i.e. cross-site scripting, a.k.a. XSS).
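If that ever becomes a concern, one common hedge is to reject anything that is not plainly arithmetic before evaluating it; a rough sketch (the whitelist below is illustrative, not a complete defense):

function calculate(expression) {
  // Allow only digits, whitespace, parentheses and basic arithmetic operators.
  if (!/^[\d\s+\-*\/%().]+$/.test(expression)) {
    return 'invalid input';
  }
  try {
    return eval(expression); // still eval, but over a heavily restricted grammar
  } catch (e) {
    return 'error';
  }
}

calculate('2 * (3 + 4)'); // 14
calculate('alert(1)');    // "invalid input"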
All other "eval is evil" and "quality of code" concerns aside...
...the security concern isn't about allowing user-supplied code: the user can delete every file they own if they feel like it. Not recommended, but entirely possible.
The danger with JavaScript, be it eval() or otherwise, is allowing an attacker to run code on the user's behalf (without consent), in the context of said user (ergo browser/domain).
This is known as XSS: Cross-Site Scripting:
Cross-site scripting holes are web-application vulnerabilities which allow attackers to bypass client-side security ... by finding ways of injecting malicious scripts into web pages [which may or may not involve eval], an attacker can gain elevated access-privileges to sensitive page-content, session cookies, and a variety of other information maintained by the browser on behalf of the user. Cross-site scripting attacks are therefore a special case of code injection.
Happy coding.
See: "eval is evil" from Efficient JavaScript code:
The 'eval' method, and related constructs such as 'new Function', are extremely wasteful. They effectively require the browser to create an entirely new scripting environment (just like creating a new web page), import all variables from the current scope, execute the script, collect the garbage, and export the variables back into the original environment. Additionally, the code cannot be cached for optimisation purposes. eval and its relatives should be avoided if at all possible.