Dynamic vs Static Compiler (JavaScript)

I'm currently writing a JavaScript compiler in ANTLR+Java.
I've read questions here on Stack Overflow on how to proceed with the execution - and the answer is always that it would be way too hard to do a static compilation (without JIT information) of a dynamic language - but why is that exactly? There is of course the obvious "type resolving" problem, and in JavaScript maybe a problem with the eval function - but are there other reasons? (They don't seem too hard to overcome purely statically, without a JIT.)
I'm excluding JIT-based compilation because I figure it would be too hard for me to implement.
I have some experience in writing static compilers with a byte-code execution.
UPDATE:
All your answers are really helpful for understanding the problem.
To clarify: does this mean that JavaScript is harder to implement than other dynamic languages?
And does this also mean that I'm better off using a tree-based interpreter than e.g. byte-code (if we ignore the fact that JS is always shipped as raw source code - hence adding extra time for generating an IR and then executing it)? Or should they be about equally easy / hard to do?
(I'm new to Stack Overflow; I don't know if this is the preferred way to update a question.)

There are lots of ways this conversation could go. Here's one direction. In JavaScript, nearly everything is an object, and properties or methods can be added to any object at run-time. As a result, you don't know at compile time what methods or properties will or won't be attached to an object, so everything has to be looked up at run-time.
For example:
var myObj = {};
function configureObject() {
  if (someRuntimeCondition) {   // placeholder for "something in the environment"
    myObj.myfunc = function () { alert("Hi"); };
  } else {
    myObj.myfunc = function () { document.write("Hello"); };
  }
}
Now, sometime later in the code you call myObj.myfunc(). It is not known at compile time what myfunc is, or whether it's even a property of myObj; it has to be a run-time lookup.
In another example, take this line of code:
var c = a + b;
What this means depends entirely upon the types of a and b, and those types are not known at compile time.
If a and b are both numbers, then this is an addition statement and c will be a number.
If either a or b is a string, then the other will be coerced to a string and c will be a string.
You can't precompile this kind of logic into native code. The execution environment has to record that this is a request for the addition operator between these two operands and it has to (at runtime) examine the types of the two operands and decide what to do.
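To make that concrete, here is a minimal sketch (my own illustration, not any engine's actual code) of the kind of runtime helper a static compiler would have to emit a call to for every +:

// Hypothetical runtime helper; the name jsAdd is illustrative only.
// It mirrors a simplified version of the ECMAScript rules for +
// (objects, Symbols and BigInt are ignored here).
function jsAdd(a, b) {
  if (typeof a === "string" || typeof b === "string") {
    return String(a) + String(b);   // string concatenation
  }
  return Number(a) + Number(b);     // numeric addition
}

// So  var c = a + b;  would compile to something like  var c = jsAdd(a, b);
console.log(jsAdd(1, 2));    // 3
console.log(jsAdd(1, "2"));  // "12"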

The challenge with writing a static JavaScript compiler is that it is in general undecidably hard to determine what objects are being referenced at any program point or what functions are being called. I could use the fact that JavaScript is dynamic to decide which function to call based on the output of some Turing machine. For example:
var functionName = RunTuringMachineAndReportOutputOnTape(myTM, myInput);
eval(functionName + "();");
At this point, unless you have advance knowledge about what myTM and myInput are, it is provably impossible to decide what function will be invoked by the call to eval, since it's undecidable to determine what is on a Turing machine's tape if it halts (you can reduce the halting problem to this problem). Consequently, no matter how clever you are, and no matter how good of a static analyzer you build, you will never be able to correctly statically resolve all function calls. You can't even bound the set of functions that might be called here, since the Turing machine's output might define some function that is then executed by the above code.
What you can do is compile code that, whenever a function is called, includes extra logic to resolve the call, and possibly uses techniques like inline caching to speed things up. Additionally, in some cases you might be able to prove that a certain function is being called (or that one of a small number of functions will be called) and can then hardcode in those calls. You could also compile multiple versions of a piece of code, one for each common type (object, numeric, etc.), then emit code to jump to the appropriate compiled trace based on the dynamic type.
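As a rough illustration of the inline-caching idea mentioned above, here is a toy sketch. It is not how any real engine represents shapes; a "hidden class" is faked here with a key list:

// A toy monomorphic inline cache for a property load such as obj.myfunc.
function shapeOf(obj) {
  return Object.keys(obj).join(",");   // stand-in for a real shape/map pointer
}

function makePropertyLoad(name) {
  var cachedShape = null;
  return function load(obj) {
    if (shapeOf(obj) === cachedShape) {
      // Fast path: this call site has seen this shape before; a real engine
      // would load the value from a known offset instead of a generic lookup.
      return obj[name];
    }
    // Slow path: generic lookup, then remember the shape for next time.
    cachedShape = shapeOf(obj);
    return obj[name];
  };
}

var loadMyfunc = makePropertyLoad("myfunc");
loadMyfunc({ myfunc: function () { return 1; } }); // slow path, fills the cache
loadMyfunc({ myfunc: function () { return 2; } }); // same shape: fast path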

V8 does that. See Compile JavaScript to Native Code with V8
With EcmaScript 3 and 5 non-strict there are a number of wrinkles around scopes which you don't run into in other dynamic languages. You might think that it is easy to do compiler optimizations on local variables, but there are edge cases in the language when it is not, even ignoring eval's scope introspection.
Consider
function f(o, x, y) {
  with (o) { return x + y + z; }
}
when called with
o = {};
o = { z: 3 };
o = { x: 1, z: 2 };
Object.prototype.z = 3, o = {};
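For illustration, here is roughly what those calls resolve to (my own worked example with made-up arguments, in non-strict code since with is forbidden in strict mode):

function f(o, x, y) {
  with (o) { return x + y + z; }
}

try { f({}, 1, 2); } catch (e) { console.log(e.name); } // ReferenceError: z is nowhere in scope
console.log(f({ z: 3 }, 1, 2));        // 6 -> x, y from the parameters, z from o
console.log(f({ x: 1, z: 2 }, 10, 2)); // 5 -> x shadowed by o.x, y from the parameter
Object.prototype.z = 3;
console.log(f({}, 1, 2));              // 6 -> z now found on o's prototype chain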
and according to EcmaScript 3,
x = (function () { return toString(); })()
should produce quite a different result from
x = toString();
because EcmaScript 3 defines an activation record as an object with a prototype chain.

Related

Avoiding strings and hardcoded function names – advantages?

I recently came across a JavaScript script, in which the author seemed to try to avoid strings inside his code and assigned everything to a variable.
So instead of
document.addEventListener('click', (e) => { /*whatever*/ });
he would write
var doc = document;
var click = 'click';
var EventListener = 'EventListener';
var addEventListener = `add${EventListener}`;
doc[addEventListener](click, (e) => { /*whatever*/ });
While caching document in a variable can be regarded as a micro-optimization, I am really wondering if there is any other benefit to this practice in general - testability, speed, maintenance, anything?
Legacy IE attachEvent should be pretty much dead, so being able to quickly make the script only run in these environments can hardly be regarded as an advantage, I suppose.
The example you give looks pretty strange, and I can't imagine any "good practice" reason for most of those moves. My first guess was that it's the work of someone who wasn't sure what they were doing, although it's odd that they'd also be using ECMAScript 6 syntax.
Another possibility is that this is generated code (e.g. the output of some kind of visual programming tool, or a de-minifier). In that situation it's common to see this sort of excessive factoring because the code is generated from templates that are conservatively written to guard against errors; I'm thinking of the way preprocessor macros in C make liberal use of parentheses.
Sometimes variable declarations are written in a way that makes clear (to the compiler and/or the reader) what type of data the variable holds. For instance, asm.js code uses unnecessary-looking variable declarations as a trick to implement strongly-typed variables on top of regular JS. And sometimes declarations are written as a form of documentation (if you see var foo = Math.PI * 0, that's probably there to tell you that foo is an angle in radians, since otherwise the author would have just written var foo = 0.0). But that still wouldn't explain something like var click='click'.
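For example, asm.js-style code (sketched here from memory, not taken from the script in question) uses coercions in declarations and returns as type annotations:

function MyAsmModule() {
  "use asm";                 // opts the whole module function into asm.js validation
  function add(x, y) {
    x = x | 0;               // annotation: x is a 32-bit integer
    y = y | 0;               // annotation: y is a 32-bit integer
    return (x + y) | 0;      // annotation: the result is a 32-bit integer
  }
  function scale(v) {
    v = +v;                  // annotation: v is a double
    return +(v * 1.5);       // annotation: the result is a double
  }
  return { add: add, scale: scale };
}

var m = MyAsmModule();
console.log(m.add(2, 3));    // 5 - runs as ordinary JS even without asm.js support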

limiting side effects when programming javascript in the browser

Limiting side effects when programming in the browser with javascript is quite tricky.
I can do things like not accessing member variables like in this silly example:
let number = 0;
const inc = (n) => {
  number = number + n;
  return number;
};
console.log(inc(1)); //=> 1
console.log(inc(1)); //=> 2
But what other things can I do to reduce side effects in my javascript?
Of course you can avoid side effects by careful programming. I assume your question is about how to prevent them. Your ability to do so is severely limited by the nature of the language. Here are some approaches:
Use web workers. See MDN. Web workers run in another global context that is different from the current window.
Isolate certain kinds of logic inside iframes. Use cross-window messaging to communicate with the iframe.
Immutability libraries. See https://github.com/facebook/immutable-js. Also http://bahmutov.calepin.co/avoid-side-effects-with-immutable-data-structures.html.
Lock down your objects with Object.freeze, Object.seal, or Object.preventExtensions. In the same vein, create read-only properties on objects using Object.defineProperty with getters but no setters, or with the writable attribute set to false (a small sketch follows this list).
Use Object.observe to get asynchronous reports on various types of changes to objects and their properties, upon which you could throw an error or take other action. (Note that Object.observe has since been withdrawn and removed from modern engines.)
If available, use Proxy for complete control over access to an object.
For considerations on preventing access to window, see javascript sandbox a module to prevent reference to Window. Also http://dean.edwards.name/weblog/2006/11/sandbox/. Finally, Semi-sandboxing Javascript eval.
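A minimal sketch of the freeze / seal / read-only-property item above (the names and values are mine):

"use strict"; // in strict mode, writes to frozen or read-only properties throw instead of failing silently

var config = Object.freeze({ apiUrl: "/api", retries: 3 });
// config.retries = 5;    // TypeError in strict mode

var counter = {};
Object.defineProperty(counter, "value", {
  value: 42,
  writable: false,         // read-only data property
  enumerable: true,
  configurable: false
});
// counter.value = 0;      // TypeError in strict mode

var point = Object.seal({ x: 1, y: 2 });
point.x = 10;              // existing properties stay writable...
// point.z = 3;            // ...but adding (or deleting) properties throws in strict mode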
It is useful to distinguish between inward side effects and outward side effects. Inward side effects are where some other code intrudes on the state of my component. Outward side effects are where my code intrudes on the state of some other component. Inward side effects can be prevented via the IIFEs mentioned in other answers, or by ES6 modules. But the more serious concern is outward side effects, which are going to require one of the approaches mentioned above.
Just what jumps to my mind thinking about your question:
Don't pollute the global namespace. Use 'var' or 'let', those keywords limit your variables to the local scope.
"By reducing your global footprint to a single name, you significantly reduce the chance of bad interactions with other applications, widgets, or libraries." - Douglas Crockford
Use semicolons
The comment section of this article provides some good (real life) reasons to always use semicolons.
Don't create String, Number or Boolean objects with new (in case you were ever tempted to):
var m = new Number(2);
var n = 2;
m === n; // false: n is a primitive number, m is a Number object
"use strict"; is your friend. Enabling strict mode is a good idea, but please don't add it to existing code since it might break something and you can not really declare strict only on lexical scopes or individual scripts as stated here
Declare variables first. One common source of surprises is that people are not aware of JavaScript's hoisting: var declarations are moved together to the top of the enclosing function, no matter where they appear in it.
function example() {             // a name is needed; a bare anonymous function declaration is a syntax error
  var x = 3;
  // variables visible at runtime at this point: x, y, i (though y and i are still undefined)
  // some code....
  var y = 1;                     // the declaration of y is hoisted to the top
  for (var i = 0; i < 10; i++) { // the declaration of i is hoisted to the top
    // some code...
  }
}
What hoisting actually means, and what kind of 'unexpected behaviour' it may lead to, is discussed here.
Use '===' instead of '=='. There are many good reasons for this; loose equality is one of the most common sources of 'side effects' or errors in JavaScript.
For more details see this great answer on SO, but let me give you a quick demonstration:
'' == '0' // false
0 == '' // true
// can be avoided by using '==='
'' === '0' // false
0 === '' // false
Make use of IIFEs. An IIFE (Immediately Invoked Function Expression) lets you declare an anonymous function that is invoked immediately. This way you create a new lexical scope and don't have to worry about the global namespace.
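A minimal sketch (the counterModule name is just illustrative):

var counterModule = (function () {
  var count = 0;                       // private: invisible outside the IIFE
  return {
    inc: function () { return ++count; },
    current: function () { return count; }
  };
})();

counterModule.inc();
console.log(counterModule.current()); // 1
console.log(typeof count);            // "undefined" - nothing leaked into the global scope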
Be careful with prototypes. Keep in mind that JavaScript objects of the same class share the same prototype. Changing a prototype means changing the behaviour of all instances of the class. (Even those which are used by other scripts/frameworks) like it happened here
Object.prototype.foo = function(){...} // careful!
Those are the 'side effects' that came to my mind. Of course there is way more to take care of (meaningful variable names, consistent code style, etc.), but I don't consider those things 'side effects', since they only make your code harder to maintain and won't break it immediately.
My favorite trick is to just use a language that compiles to javascript, instead of using javascript.
However, two important tricks you can do :
start your file with "use strict";. This will turn on validation of your code and prevent usage of undeclared variables. Yes, that's a special string that the browser will know how to deal with.
Use functions when needed. JavaScript (before ES6) has no block scope, only function scope, so get (function () { })(); into your muscle memory.
Normal CS fundamentals also apply: separate your code into logical pieces, name your variables properly, always explicitly initialize variables when you need them, etc.

Correct use of the JavaScript interface keyword

First of all, no, I'm not trying to create any sort of Java-like interface for my JavaScript code. I've seen those questions all over, and while I'm still a relative novice to JavaScript, I know those aren't part of the language.
However, I'm curious what the actual intended use of the interface keyword is. For example, Math is an interface, containing definitions (but not implementations). I believe (and may be totally wrong) that these are there to provide a means for the definers of the language to enforce a set of behaviors to be implemented in various JavaScript engines. Is that correct?
Furthermore, I have a desire to have a "static class" that contains a bunch of utility methods. I like that Math.sqrt(3) has an outer namespace ('Math') which is capitalized, and a number of logically similar methods and values in it. Maybe it's just my Java/Ruby background that makes me want a capital on the grouping objects. Is that bad form?
var ShapeInspections = {
  isSymmetrical: function (s) {
    // determine if shape is symmetrical
  },
  numAngles: function (s) {
    // return the number of angles
  }
}
A purely contrived example, but is it anti-idiomatic to name the "module" this way?
Okay, so as the other answers explain, the interface keyword has no real use case in the JavaScript world yet.
Your Math example made me suspect that you are talking about a design pattern called the Module Pattern, widely used for scoping JavaScript code. There are many ways of making your code modular. For example, just as OddDev answered, the famous Prototype Pattern can embed your code in a modular fashion (just like your Math example). Here is a Revealing Prototype Pattern example, with private variables and functions for additional flexibility:
/* Example from:
   http://www.innoarchitech.com/scalable-maintainable-javascript-modules */
var myPrototypeModule = (function () {
  var privateVar = "Alex Castrounis",
      count = 0;

  function PrototypeModule(name) {
    this.name = name;
  }

  function privateFunction() {
    console.log("Name:" + privateVar);
    count++;
  }

  PrototypeModule.prototype.setName = function (strName) {
    this.name = strName;
  };

  PrototypeModule.prototype.getName = function () {
    privateFunction();
  };

  return PrototypeModule;
})();
But that is not all. Other options include the Scoped module pattern, the POJO module pattern and many more. Have a look at How to Write Highly Scalable and Maintainable JavaScript: Modules; it has a very simple and yet thorough set of examples.
So far, we have talked about plain JavaScript. If you have the ability to use libraries in your code, tools such as RequireJS (AMD) and the CommonJS format are there to help you with out-of-the-box module functionality. Have a look at Addy Osmani's post about Writing Modular JavaScript With AMD, CommonJS & ES Harmony.
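For a flavour of what that looks like, here is a minimal CommonJS-style module (my own sketch; the file names and placeholder bodies are made up):

// shapeInspections.js (CommonJS, e.g. Node.js)
function isSymmetrical(s) {
  // determine if shape is symmetrical (placeholder logic)
  return true;
}

function numAngles(s) {
  // return the number of angles (placeholder logic)
  return s.angles ? s.angles.length : 0;
}

module.exports = { isSymmetrical: isSymmetrical, numAngles: numAngles };

// elsewhere:
// var ShapeInspections = require('./shapeInspections');
// ShapeInspections.numAngles({ angles: [60, 60, 60] });  // 3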
The interface keyword in JavaScript is a FutureReservedWord, so it does absolutely nothing right now, though that may change in future specifications. (See ECMAScript 5.1, section 7.6.1.2.) The ES6 draft treats it the same way.
As for you module, this is a perfectly idiomatic solution. It is always a good idea to "namespace" your functions, as it keeps the global scope as clean as possible.
I believe (and may be totally wrong) that these are there to provide a means for the definers of the language to enforce a set of behaviors to be implemented in various JS engines. Is that correct?
No, this is not correct. Things like "Math" etc. are objects containing functions. If you use, for example, Math.pow(...), you just execute the function stored in the "Math" object. Check this example:
var Math = {};          // shadowing the built-in Math with a plain object
Math.pow = function () {
  alert("stuff");
};
Math.pow(2, 3);         // alerts "stuff" - pow is just a function stored on the object
var ShapeInspections = {
  isSymmetrical: function (s) {
    // determine if shape is symmetrical
  },
  numAngles: function (s) {
    // return the number of angles
  }
}

A purely contrived example, but is it anti-idiomatic to name the "module" this way?
It's okay to name your objects like this. As already discussed "Math" is also just an object and follows these naming conventions.
To make things clear for the interface keyword:
The following tokens are also considered to be FutureReservedWords
when they occur within strict mode code (see 10.1.1). The occurrence
of any of these tokens within strict mode code in any context where
the occurrence of a FutureReservedWord would produce an error must
also produce an equivalent error:
implements let private public yield
interface package protected static
It's just reserved because it may be needed in the future. So don't worry too much about it :) http://www.ecma-international.org/ecma-262/5.1/#sec-7.6
Do not confuse the "interfaces" that are specified in IDL with the interface keyword.
The latter is reserved for potential future use, but is not yet actually used in ECMAScript (not even in ES6).

How does Bluebird's util.toFastProperties function make an object's properties "fast"?

In Bluebird's util.js file, it has the following function:
function toFastProperties(obj) {
    /*jshint -W027*/
    function f() {}
    f.prototype = obj;
    ASSERT("%HasFastProperties", true, obj);
    return f;
    eval(obj);
}
For some reason, there's a statement after the return statement, and I'm not sure why it's there.
As well, it seems that it is deliberate, as the author had silenced the JSHint warning about this:
Unreachable 'eval' after 'return'. (W027)
What exactly does this function do? Does util.toFastProperties really make an object's properties "faster"?
I've searched through Bluebird's GitHub repository for any comments in the source code or an explanation in their list of issues, but I couldn't find any.
2017 update: First, for readers coming today - here is a version that works with Node 7 (4+):
function enforceFastProperties(o) {
    function Sub() {}
    Sub.prototype = o;
    var receiver = new Sub(); // create an instance
    function ic() { return typeof receiver.foo; } // perform access
    ic();
    ic();
    return o;
    eval("o" + o); // ensure no dead code elimination
}
Sans one or two small optimizations - all the below is still valid.
Let's first discuss what it does and why that's faster and then why it works.
What it does
The V8 engine uses two object representations:
Dictionary mode - in which objects are stored as key-value maps (essentially a hash map).
Fast mode - in which objects are stored like structs, so there is no computation involved in property access.
Here is a simple demo that demonstrates the speed difference. Here we use the delete statement to force the objects into slow dictionary mode.
The engine tries to use fast mode whenever possible and generally whenever a lot of property access is performed - however sometimes it gets thrown into dictionary mode. Being in dictionary mode has a big performance penalty so generally it is desirable to put objects in fast mode.
This hack is intended to force the object into fast mode from dictionary mode.
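In the spirit of the demo mentioned above, here is my own sketch (not the original demo; whether delete actually forces dictionary mode, and by how much it hurts, depends heavily on the engine version):

// Build two arrays of structurally identical objects; in the "slow" ones we
// delete a property, which (in many V8 versions) drops them into dictionary mode.
function makeFast() { return { a: 1, b: 2, c: 3 }; }
function makeSlow() { var o = { tmp: 0, a: 1, b: 2, c: 3 }; delete o.tmp; return o; }

var fast = [], slow = [];
for (var i = 0; i < 100000; i++) { fast.push(makeFast()); slow.push(makeSlow()); }

function sum(objs) {
  var total = 0;
  for (var j = 0; j < objs.length; j++) {
    total += objs[j].a + objs[j].b + objs[j].c;  // repeated property access
  }
  return total;
}

console.time("struct-like objects"); sum(fast); console.timeEnd("struct-like objects");
console.time("dictionary objects");  sum(slow); console.timeEnd("dictionary objects");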
Bluebird's Petka himself talks about it here.
These slides (wayback machine) by Vyacheslav Egorov also mention it.
The question https://stackoverflow.com/questions/23455678/pros-and-cons-of-dictionary-mode and its accepted answer are also related.
This slightly outdated article is still a fairly good read that can give you a good idea on how objects are stored in v8.
Why it's faster
In JavaScript prototypes typically store functions shared among many instances and rarely change a lot dynamically. For this reason it is very desirable to have them in fast mode to avoid the extra penalty every time a function is called.
For this - v8 will gladly put objects that are the .prototype property of functions in fast mode since they will be shared by every object created by invoking that function as a constructor. This is generally a clever and desirable optimization.
How it works
Let's first go through the code and figure what each line does:
function toFastProperties(obj) {
    /*jshint -W027*/        // suppress the "unreachable code" warning
    function f() {}         // declare a new function
    f.prototype = obj;      // assign obj as its prototype to trigger the optimization

    // assert the optimization passes to prevent the code from breaking in the
    // future in case this optimization breaks:
    ASSERT("%HasFastProperties", true, obj); // requires the "native syntax" flag

    return f;               // return it
    eval(obj);              // prevent the function from being optimized through dead code
                            // elimination or further optimizations. This code is never
                            // reached, but even using eval in unreachable code causes v8
                            // to not optimize functions.
}
We don't have to find the code ourselves to assert that v8 does this optimization, we can instead read the v8 unit tests:
// Adding this many properties makes it slow.
assertFalse(%HasFastProperties(proto));
DoProtoMagic(proto, set__proto__);
// Making it a prototype makes it fast again.
assertTrue(%HasFastProperties(proto));
Reading and running this test shows us that this optimization indeed works in v8. However - it would be nice to see how.
If we check objects.cc we can find the following function (L9925):
void JSObject::OptimizeAsPrototype(Handle<JSObject> object) {
  if (object->IsGlobalObject()) return;

  // Make sure prototypes are fast objects and their maps have the bit set
  // so they remain fast.
  if (!object->HasFastProperties()) {
    MigrateSlowToFast(object, 0);
  }
}
Now, JSObject::MigrateSlowToFast just explicitly takes the Dictionary and converts it into a fast V8 object. It's a worthwhile read and an interesting insight into v8 object internals - but it's not the subject here. I still warmly recommend that you read it here as it's a good way to learn about v8 objects.
If we check out SetPrototype in objects.cc, we can see that it is called in line 12231:
if (value->IsJSObject()) {
  JSObject::OptimizeAsPrototype(Handle<JSObject>::cast(value));
}
Which in turn is called by FunctionSetPrototype, which is what we get with .prototype =.
Doing __proto__ = or Object.setPrototypeOf would also have worked, but these are ES6 functions and Bluebird runs on all browsers since Netscape 7, so that's out of the question for simplifying the code here. For example, if we check .setPrototypeOf we can see:
// ES6 section 19.1.2.19.
function ObjectSetPrototypeOf(obj, proto) {
  CHECK_OBJECT_COERCIBLE(obj, "Object.setPrototypeOf");
  if (proto !== null && !IS_SPEC_OBJECT(proto)) {
    throw MakeTypeError("proto_object_or_null", [proto]);
  }
  if (IS_SPEC_OBJECT(obj)) {
    %SetPrototype(obj, proto); // MAKE IT FAST
  }
  return obj;
}
Which directly is on Object:
InstallFunctions($Object, DONT_ENUM, $Array(
...
"setPrototypeOf", ObjectSetPrototypeOf,
...
));
So - we have walked the path from the code Petka wrote to the bare metal. This was nice.
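Based on that path, a setPrototypeOf-based variant of the trick would look something like the sketch below. This is my own illustration, not Bluebird's code, and whether it still triggers the optimization depends on the V8 version:

function toFastPropertiesViaSetPrototypeOf(obj) {
  // Making obj the prototype of a throwaway object goes through
  // %SetPrototype -> OptimizeAsPrototype, just like `f.prototype = obj` does.
  Object.setPrototypeOf({}, obj);
  return obj;
}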
Disclaimer:
Remember this is all implementation detail. People like Petka are optimization freaks. Always remember that premature optimization is the root of all evil 97% of the time. Bluebird does something very basic very often so it gains a lot from these performance hacks - being as fast as callbacks isn't easy. You rarely have to do something like this in code that doesn't power a library.
V8 developer here. The accepted answer is a great explanation; I just wanted to highlight one thing: the so-called "fast" and "slow" property modes are unfortunate misnomers, as each has its pros and cons. Here is a (slightly simplified) overview of the performance of various operations:
operation                                  struct-like properties    dictionary properties
adding a property to an object             --                        +
deleting a property                        ---                       +
reading/writing a property, first time     -                         +
reading/writing, cached, monomorphic       +++                       +
reading/writing, cached, few shapes        ++                        +
reading/writing, cached, many shapes       --                        +
colloquial name                            "fast"                    "slow"
So as you can see, dictionary properties are actually faster for most of the lines in this table, because they don't care what you do, they just handle everything with solid (though not record-breaking) performance. Struct-like properties are blazing fast for one particular situation (reading/writing the values of existing properties, where every individual place in the code only sees very few distinct object shapes), but the price they pay for that is that all other operations, in particular those that add or remove properties, become much slower.
It just so happens that the special case where struct-like properties have their big advantage (+++) is particularly frequent and really important for many apps' performance, which is why they acquired the "fast" moniker. But it's important to realize that when you delete properties and V8 switches the affected objects to dictionary mode, then it isn't being dumb or trying to be annoying: rather it attempts to give you the best possible performance for what you're doing. We have landed patches in the past that have achieved significant performance improvements by making more objects go to dictionary ("slow") mode sooner when appropriate.
Now, it can happen that your objects would generally benefit from struct-like properties, but something your code does causes V8 to transition them to dictionary properties, and you'd like to undo that; Bluebird had such a case. Still, the name toFastProperties is a bit misleading in its simplicity; a more accurate (though unwieldy) name would be spendTimeOptimizingThisObjectAssumingItsPropertiesWontChange, which would indicate that the operation itself is costly, and it only makes sense in certain limited cases. If someone took away the conclusion "oh, this is great, so I can happily delete properties now, and just call toFastProperties afterwards every time", then that would be a major misunderstanding and cause pretty bad performance degradation.
If you stick with a few simple rules of thumb, you'll never have a reason to even try to force any internal object representation changes:
Use constructors, and initialize all properties in the constructor. (This helps not only your engine, but also understandability and maintainability of your code. Consider that TypeScript doesn't quite force this but strongly encourages it, because it helps engineering productivity.)
Use classes or prototypes to install methods, don't just slap them onto each object instance. (Again, this is a common best practice for many reasons, one of them being that it's faster.)
Avoid delete. When properties come and go, prefer using a Map over the ES5-era "object-as-map" pattern. When an object can toggle into and out of a certain state, prefer boolean (or equivalent) properties (e.g. o.has_state = true; o.has_state = false;) over adding and deleting an indicator property. (A small sketch follows this list.)
When it comes to performance, measure, measure, measure. Before you start sinking time into performance improvements, profile your app to see where the hotspots are. When you implement a change that you hope will make things faster, verify with your real app (or something extremely close to it; not just a 10-line microbenchmark!) that it actually helps.
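To illustrate the "avoid delete" rule above, here is a small sketch (the names are mine) of the patterns it recommends:

// Instead of the ES5-era "object as map" pattern with delete...
var cacheObj = {};
cacheObj["user:1"] = { name: "Ada" };
delete cacheObj["user:1"];        // this is the kind of delete that can force dictionary mode

// ...prefer a real Map when keys come and go:
var cache = new Map();
cache.set("user:1", { name: "Ada" });
cache.delete("user:1");           // no impact on any object's property layout

// And instead of adding/removing an indicator property,
// toggle a boolean that always exists, so the object's shape never changes:
var task = { id: 1, done: false };
task.done = true;
task.done = false;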
Lastly, if your team lead tells you "I've heard that there are 'fast' and 'slow' properties, please make sure that all of ours are 'fast'", then point them at this post :-)
Reality from 2021 (NodeJS version 12+).
Seems like a huge optimization has been done: objects with deleted fields and sparse arrays don't become slow. Or am I missing something?
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var obj1 = new Point(1, 2);
var obj2 = new Point(3, 4);
delete obj2.y;

var arr = [1, 2, 3];
arr[100] = 100;

console.log('obj1 has fast properties:', %HasFastProperties(obj1));
console.log('obj2 has fast properties:', %HasFastProperties(obj2));
console.log('arr has fast properties:', %HasFastProperties(arr));
All three show true:
obj1 has fast properties: true
obj2 has fast properties: true
arr has fast properties: true
// run in Node with enabled flag
// node --allow-natives-syntax script.js
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var obj2 = new Point(3, 4);
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true
delete obj2.y;
console.log('obj2 has fast properties:', %HasFastProperties(obj2)); // true

var obj = { x: 1, y: 2 };
console.log('obj has fast properties:', %HasFastProperties(obj));   // true
delete obj.x;
console.log('obj has fast properties:', %HasFastProperties(obj));   // false
Objects created with a constructor function and plain object literals seem to behave differently here.

Getting data back out of closures

Is there a way to extract a variable that is closed over by a function?
In the (JavaScript-like) R language, values that are closed over can be accessed by looking up the function's scope directly. For example, the constant combinator takes a value and returns a function that always yields said value.
K = function (self) {
  function () {
    self
  }
}
TenFunction = K(10)
TenFunction()
10
In R the value bound to "self" can be looked up directly.
environment(TenFunction)[[ "self" ]]
10
In R this is a perfectly normal and acceptable thing to want to do. Is there a similar mechanism in JavaScript?
My motivation is that I'm working with functions that I create with an enclosed value called "self". I'd like to be able to extract that data back out of the function. A mock example loosely related to my problem is:
var Velocity = function (self) {
  return function (time) {
    return self.vx0 + self.ax * time
  }
}
var f = Velocity({vx0: 10, ax: 100})
I'd really like to extract the values of self.vx0 and self.ax as they are difficult to recover by other means. Is there a function "someFun" that does this?
someFun(f).self
{vx0: 10, ax: 100}
Any help or insights would be appreciated. If any clarification is needed leave a comment below and I'll edit my question.
Not as you have described, no. Function objects support very few reflective methods, most of which are deprecated or obsolete. There is a good reason for this: while closures are a common way to implement lexically scoped functions, they are not the only way, and in some cases they may not be the fastest. Javascript probably avoids exposing such details to allow implementations more flexibility to improve performance.
That said, you can get around this in various ways. One approach is to add an argument to the inner function telling it that it should return the value of a certain variable rather than doing what it usually does. Alternatively, you can store the variable alongside the function.
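For instance, here is a sketch of the "store the variable alongside the function" approach, using the Velocity example from the question (attaching it as a self property is my choice, not a standard API):

var Velocity = function (self) {
  var f = function (time) {
    return self.vx0 + self.ax * time;
  };
  f.self = self;   // expose the enclosed data as a property on the function itself
  return f;
};

var f = Velocity({ vx0: 10, ax: 100 });
console.log(f(2));      // 210
console.log(f.self);    // { vx0: 10, ax: 100 } - recoverable again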
For an example of an alternative implementation technique, look up "lambda lifting". Some implementations may use different approaches in different situations.
Edit
An even better reason not to allow that sort of reflection is that it breaks the function abstraction rather horribly, and in doing so exposes hairy details of how the function was produced. If you want that sort of access, you really want an object, not a function.
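In that spirit, a small sketch of the object-based alternative (again, the shape of the API is mine):

function Velocity(self) {
  return {
    self: self,   // the state stays inspectable
    at: function (time) { return self.vx0 + self.ax * time; }
  };
}

var v = Velocity({ vx0: 10, ax: 100 });
console.log(v.at(2));   // 210
console.log(v.self);    // { vx0: 10, ax: 100 }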
