Limiting values to a variable in Node.js - javascript

I'm not sure if this question makes sense but when I'm using an IDE and for example I type:
x = 5
s = typeof(x)
if (s === ) //Cursor selection after === (before i finished the statement)
the IDE's autocomplete feature gives me a list of the possible values. And if I type a value that doesn't exist in that list, it's highlighted, and when I execute the program (in other examples, not this one), it throws an error.
I want to achieve similar functionality with my own variables, so that only specific values can be assigned to them.

I'd recommend using TypeScript. See this question.
For example, TypeScript will flag an error if you try to compare the result of typeof against anything that isn't a proper JavaScript type name:
x = 5
s = typeof(x)
if (s === 'foobar') {
}
results in a compile-time error along the lines of: This condition will always return 'false' since the types of the two operands have no overlap.
In a reasonable IDE with TypeScript support (such as VSCode), any line of code that doesn't make sense from a type perspective will be highlighted with an error like this.
If you want to permit only particular values for some variable, you can use | to write a union of literal types, e.g.:
let someString: 'someString' | 'someOtherString' = 'someString';
will mean that only those two strings are accepted when you write
someString = ...
later.
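For example, with the union-typed variable above, TypeScript accepts only those two literals and flags anything else at compile time:

let someString: 'someString' | 'someOtherString' = 'someString';
someString = 'someOtherString'; // okay
someString = 'anythingElse';    // error: Type '"anythingElse"' is not assignable to type '"someString" | "someOtherString"'.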
TypeScript makes writing large applications so much easier. It does take some time to get used to, but it's well worth it IMO. It turns many hard-to-debug runtime errors into (usually) trivially-fixable compile-time errors.

Related

Is it possible, in JavaScript, to eliminate code branches by tracking their arguments?

I tried to find some tool that would do this, but prepack is not quite doing what I want and other tools are failing at this particular example.
I think it's quite common in programming to use functions in which some of the arguments are constants and the remainder are variables. A good example in JS is parseInt(someNumberToParse, 10).
If I were to write a similar function in JS and only ever use it with 10 as the second argument, the function could be optimised to remove the logic for other bases; however, all tools that I know of fail at this to some degree.
This makes me wonder: would it be possible to write a program that takes any code (module or script) and tracks values and types as the code is parsed?
TypeScript is doing it in a way - I can use type guards to narrow down possible types.
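(For instance, a minimal illustration of that kind of narrowing; the function here is made up for illustration:)

function describe(x: string | number) {
  if (typeof x === 'number') {
    return x.toFixed(2); // x is narrowed to number here
  }
  return x.toUpperCase(); // and to string here
}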
I know that for modules, values are most often not known, but a silly example like this:
const add = (a, b = undefined) => {
  if (b > 10) {
    return a + b
  }
  return a + a
}
export const mul2 = a => add(a)
In this case we know that b will always be undefined, so b > 10 can never be true, and we can safely remove the condition.
In effect, we could just optimise to
export const mul2 = a => a + a
Sometimes we can't know the value, but we do know its possible type(s), or that the value satisfies some condition.
Is it possible to do that in JS? Would it be possible to do it in a real program (possibly megabytes of code)?

Is there a way to protect your code from Javascript quirk `Array(length) vs Array(el1,el2,...)` with Typescript

I'm not looking for other ways to copy an array. My question is particularly about types.
TypeScript has nothing against this kind of code (playground):
const sum = original_numbers => {
  const numbers_copy = new Array(...original_numbers) // here is the problem
  const res = numbers_copy.reduce((acc, v) => acc + v, 0)
  console.log(res)
  return res
}
sum([1, 2]) // 3 as expected
sum([1])    // 0 !!! new Array(1) is a one-slot empty array, and TypeScript doesn't complain. You can check the playground.
Is there anything that can be done about it?
The Array constructor has problems (including this one). With TSLint, you can use the prefer-array-literal rule to forbid such uses of new Array.
For ESLint, you can use no-array-constructor.
To create an array from arguments, consider using Array.of instead.
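A quick sketch of the difference between the constructor's length overload and Array.of:

new Array(3)    // [ <3 empty slots> ] - a single numeric argument is a length
new Array(1, 2) // [1, 2] - two or more arguments become elements
Array.of(3)     // [3] - arguments are always treated as elements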
The Array constructor is problematic enough that there's no "good" way to give it a type definition that works for all use cases. The standard library's definition for it looks like
interface ArrayConstructor {
  new (arrayLength?: number): any[];
  new <T>(arrayLength: number): T[];
  new <T>(...items: T[]): T[];
}
so you are likely to get an array with elements of type any, which is intentionally unsound, letting you do things that might not be safe without complaint.
If you want a safer version of this you might want to use declaration merging (and see global augmentation if your code is in a module) to use the unknown type instead:
interface ArrayConstructor {
  new (arrayLength?: number): unknown[];
}
At this point, your original code will error like this:
const sum_ = (original_numbers: number[]) => {
  const numbers_copy = new Array(...original_numbers) // unknown[]
  const res = numbers_copy.reduce((acc, v) => acc + v, 0) // error!
  // ----------------------------> ~~~  ~
  // Object is of type 'unknown'.
  console.log(res)
  return res
}
because numbers_copy turns out to maybe not be an array of numbers. If you want to fix that, then you need to change how you call new Array() so that the compiler is convinced you're getting a number[]. Possibly like this:
const sum = (original_numbers: number[]) => {
  const numbers_copy = new Array(0, 0, ...original_numbers) // number[]
  const res = numbers_copy.reduce((acc, v) => acc + v, 0) // okay
  console.log(res)
  return res
}
sum([1, 2]) // 3 as expected
sum([1])    // 1 as expected
That's kind of silly, but adding two more 0s ensures that you're getting an actual array of numbers, and 0 doesn't affect the sum. Obviously the "right" thing to do is to stay away from new Array(...someOtherArray) and use one of the many well-behaved methods to copy arrays. You know that, though.
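For completeness, any of these well-behaved copies keeps both the type checker and the runtime happy:

const copy1 = original_numbers.slice();     // number[]
const copy2 = [...original_numbers];        // number[]
const copy3 = Array.from(original_numbers); // number[]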
The underlying conflict you're having is that you maybe expected TypeScript to save you from this automatically. The problem is that TypeScript is a valiant effort to give a useful static type system to JavaScript, which is dynamically typed and arguably weakly typed. It can do all sorts of crazy things at runtime, some of which people actually rely on. If TypeScript were to be tightened up so much as to prevent these, it would end up being a legalistic and annoying language to use.
So TypeScript is, for better or worse, unsound in places. This is summed up in TypeScript Design Non-Goal #3: it is not a design goal of TypeScript to apply a sound or "provably correct" type system. Instead, the goal is to strike a balance between correctness and productivity.
The particular point at which correctness becomes more or less important than productivity (or where productivity is actually reduced by having to fix potentially catchable issues at runtime) is subjective, and different people have different opinions. For example, you might want to generalize the problem here to any place where any in a library signature hides potential problems, and you'd prefer that unknown be used everywhere instead. There's an open issue for that: microsoft/TypeScript#26188. If so, you might want to give that a đź‘Ť and a compelling description of why you want it.
I've often joked that there should be a TypeScript Unsoundness Support Group to help people deal with the unfortunate reality of the language's type safety limitations. Especially because those of us who have more or less come to accept them can come off as flippant when others run into them for the first time.
Playground link to code

Why is <= slower than < using this code snippet in V8?

I am reading the slides Breaking the Javascript Speed Limit with V8, and there is an example like the code below. I cannot figure out why <= is slower than < in this case; can anybody explain that? Any comments are appreciated.
Slow:
this.isPrimeDivisible = function(candidate) {
  for (var i = 1; i <= this.prime_count; ++i) {
    if (candidate % this.primes[i] == 0) return true;
  }
  return false;
}
(Hint: primes is an array of length prime_count)
Faster:
this.isPrimeDivisible = function(candidate) {
  for (var i = 1; i < this.prime_count; ++i) {
    if (candidate % this.primes[i] == 0) return true;
  }
  return false;
}
[More info] The speed improvement is significant; in my local test environment the results are as follows:
V8 version 7.3.0 (candidate)
Slow:
time d8 prime.js
287107
12.71 user
0.05 system
0:12.84 elapsed
Faster:
time d8 prime.js
287107
1.82 user
0.01 system
0:01.84 elapsed
Other answers and comments mention that the difference between the two loops is that the first one executes one more iteration than the second one. This is true, but in an array that grows to 25,000 elements, one iteration more or less would make only a minuscule difference. As a ballpark guess, if we assume the average length as it grows is 12,500, then the difference we might expect should be around 1/12,500, or only 0.008%.
The performance difference here is much larger than would be explained by that one extra iteration, and the problem is explained near the end of the presentation.
this.primes is a contiguous array (every element holds a value) and the elements are all numbers.
A JavaScript engine may optimize such an array into a simple array of actual numbers, instead of an array of objects which happen to contain numbers but could contain other values or no value. The first format is much faster to access: it takes less code, and the array is much smaller, so it fits better in cache. But there are some conditions that can prevent this optimized format from being used.
One condition would be if some of the array elements are missing. For example:
let a = [];
a[0] = 10;
a[2] = 20;
Now what is the value of a[1]? It has no value. (It isn't even correct to say it has the value undefined - an array element containing the undefined value is different from an array element that is missing entirely.)
There isn't a way to represent this with numbers only, so the JavaScript engine is forced to use the less optimized format. If a[1] contained a numeric value like the other two elements, the array could potentially be optimized into an array of numbers only.
Another reason for an array to be forced into the deoptimized format can be if you attempt to access an element outside the bounds of the array, as discussed in the presentation.
The first loop with <= attempts to read an element past the end of the array. The algorithm still works correctly, because in the last extra iteration:
this.primes[i] evaluates to undefined because i is past the array end.
candidate % undefined (for any value of candidate) evaluates to NaN.
NaN == 0 evaluates to false.
Therefore, the return true is not executed.
So it's as if the extra iteration never happened - it has no effect on the rest of the logic. The code produces the same result as it would without the extra iteration.
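You can verify each step of that reasoning in a console:

const primes = [2, 3, 5];
primes[3]       // undefined - reading past the end of the array
7 % primes[3]   // NaN
NaN == 0        // false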
But to get there, it tried to read a nonexistent element past the end of the array. This forces the array out of optimization - or at least did at the time of this talk.
The second loop with < reads only elements that exist within the array, so it allows an optimized array and code.
The problem is described in pages 90-91 of the talk, with related discussion in the pages before and after that.
I happened to attend this very Google I/O presentation and talked with the speaker (one of the V8 authors) afterward. I had been using a technique in my own code that involved reading past the end of an array as a misguided (in hindsight) attempt to optimize one particular situation. He confirmed that if you tried to even read past the end of an array, it would prevent the simple optimized format from being used.
If what the V8 author said is still true, then reading past the end of the array would prevent it from being optimized and it would have to fall back to the slower format.
Now it's possible that V8 has been improved in the meantime to efficiently handle this case, or that other JavaScript engines handle it differently. I don't know one way or the other on that, but this deoptimization is what the presentation was talking about.
I work on V8 at Google, and wanted to provide some additional insight on top of the existing answers and comments.
For reference, here's the full code example from the slides:
var iterations = 25000;

function Primes() {
  this.prime_count = 0;
  this.primes = new Array(iterations);
  this.getPrimeCount = function() { return this.prime_count; }
  this.getPrime = function(i) { return this.primes[i]; }
  this.addPrime = function(i) {
    this.primes[this.prime_count++] = i;
  }
  this.isPrimeDivisible = function(candidate) {
    for (var i = 1; i <= this.prime_count; ++i) {
      if ((candidate % this.primes[i]) == 0) return true;
    }
    return false;
  }
};

function main() {
  var p = new Primes();
  var c = 1;
  while (p.getPrimeCount() < iterations) {
    if (!p.isPrimeDivisible(c)) {
      p.addPrime(c);
    }
    c++;
  }
  console.log(p.getPrime(p.getPrimeCount() - 1));
}

main();
First and foremost, the performance difference has nothing to do with the < and <= operators directly. So please don't jump through hoops just to avoid <= in your code because you read on Stack Overflow that it's slow --- it isn't!
Second, folks pointed out that the array is "holey". This was not clear from the code snippet in OP's post, but it is clear when you look at the code that initializes this.primes:
this.primes = new Array(iterations);
This results in an array with a HOLEY elements kind in V8, even if the array ends up completely filled/packed/contiguous. In general, operations on holey arrays are slower than operations on packed arrays, but in this case the difference is negligible: it amounts to 1 additional Smi (small integer) check (to guard against holes) each time we hit this.primes[i] in the loop within isPrimeDivisible. No big deal!
TL;DR The array being HOLEY is not the problem here.
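(As an aside: if you do want a packed array from the start, one option, assuming you can fill the array as you build it rather than pre-sizing it, looks like this:)

const primes = [];  // starts out PACKED
primes.push(2);     // stays PACKED as long as you never leave a hole
// or, to build a pre-filled array without holes:
const filled = Array.from({ length: 25000 }, () => 0);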
Others pointed out that the code reads out of bounds. It's generally recommended to avoid reading beyond the length of arrays, and in this case it would indeed have avoided the massive drop in performance. But why though? V8 can handle some of these out-of-bound scenarios with only a minor performance impact. What's so special about this particular case, then?
The out-of-bounds read results in this.primes[i] being undefined on this line:
if ((candidate % this.primes[i]) == 0) return true;
And that brings us to the real issue: the % operator is now being used with non-integer operands!
integer % someOtherInteger can be computed very efficiently; JavaScript engines can produce highly-optimized machine code for this case.
integer % undefined on the other hand amounts to a way less efficient Float64Mod, since undefined is represented as a double.
The code snippet can indeed be improved by changing the <= into < on this line:
for (var i = 1; i <= this.prime_count; ++i) {
...not because <= is somehow a superior operator than <, but just because this avoids the out-of-bounds read in this particular case.
TL;DR: The slower loop is caused by accessing the array out of bounds. This either forces the engine to recompile the function with fewer (or even no) optimizations, or prevents it from compiling the function with those optimizations to begin with (if the JIT compiler detected/suspected this condition before the first compiled 'version'). Read on below for why.
Someone just has to say this (utterly amazed that nobody already did):
There used to be a time when the OP's snippet would have been a de-facto example in a beginner's programming book, used to emphasize that 'arrays' in JavaScript are indexed starting at 0, not 1, and as such serving as an example of a common 'beginner's mistake' (don't you love how I avoided the phrase 'programming error' ;)): out-of-bounds array access.
Example 1:
a dense array (contiguous, meaning no gaps between indexes, AND with an actual element at each index) of 5 elements, using 0-based indexing (always the case in ES262).
var arr_five_char=['a', 'b', 'c', 'd', 'e']; // arr_five_char.length === 5
// indexes are: 0 , 1 , 2 , 3 , 4 // there is NO index number 5
Thus we are not really talking about a performance difference between < and <= (or 'one extra iteration'), but rather:
'why does the correct snippet (b) run faster than the erroneous snippet (a)'?
The answer is 2-fold (although from an ES262 language implementer's perspective both are forms of optimization):
Data-Representation: how to represent/store the Array internally in memory (object, hashmap, 'real' numerical array, etc.)
Functional Machine-code: how to compile the code that accesses/handles (read/modify) these 'Arrays'
Item 1 is sufficiently (and correctly IMHO) explained by the accepted answer, but that only spends 2 words ('the code') on Item 2: compilation.
More precisely: JIT-Compilation, and even more importantly, JIT-RE-Compilation!
The language specification is basically just a description of a set of algorithms ('steps to perform to achieve the defined end-result'). Which, as it turns out, is a very beautiful way to describe a language.
And it leaves the actual method that an engine uses to achieve specified results open to the implementers, giving ample opportunity to come up with more efficient ways to produce defined results.
A spec conforming engine should give spec conforming results for any defined input.
Now, with JavaScript code/libraries/usage increasing, and remembering how many resources (time/memory/etc.) a 'real' compiler uses, it's clear we can't make users visiting a web-page wait that long (and require them to have that many resources available).
Imagine the following simple function:
function sum(arr) {
  var r = 0, i = 0;
  for (; i < arr.length;) r += arr[i++];
  return r;
}
Perfectly clear, right? Doesn't require ANY extra clarification, right? The return type is Number, right?
Well... no, no & no... It depends on what argument you pass to the named function parameter arr...
sum('abcde'); // String('0abcde')
sum([1,2,3]); // Number(6)
sum([1,,3]); // Number(NaN)
sum(['1',,3]); // String('01undefined3')
sum([1,,'3']); // String('NaN3')
sum([1,2,{valueOf:function(){return this.val}, val:6}]); // Number(9)
var val=5; sum([1,2,{valueOf:function(){return val}}]); // Number(8)
See the problem? Then consider that this is just barely scraping the massive number of possible permutations...
We don't even know what TYPE the function RETURNS until we are done...
Now imagine this same function-code actually being used on different types, or even variations, of input: both 'arrays' described completely literally in source code and 'arrays' generated dynamically in-program...
Thus, if you were to compile the function sum JUST ONCE, the only way to always return the spec-defined result for any and all types of input is, obviously, to perform ALL spec-prescribed main AND sub-steps (like an unnamed pre-y2k browser).
No optimizations (because no assumptions can be made), and a dead-slow interpreted scripting language remains.
JIT-Compilation (JIT as in Just In Time) is the current popular solution.
So, you start to compile the function using assumptions regarding what it does, returns and accepts.
You come up with checks, as simple as possible, to detect whether the function might start returning non-spec-conformant results (say, because it receives unexpected input).
Then you toss away the previously compiled result and recompile to something more elaborate, decide what to do with the partial result you already have (is it valid to be trusted, or should it be computed again to be sure?), tie the function back into the program, and try again. Ultimately, you fall back to stepwise script interpretation as in the spec.
All of this takes time!
All browsers keep working on their engines; with each and every sub-version you will see things improve and regress. Strings were at some point in history truly immutable strings (hence array.join was faster than string concatenation); now we use ropes (or similar), which alleviate the problem. Both return spec-conforming results, and that is what matters!
Long story short: just because JavaScript's semantics often have our back (as with this silent bug in the OP's example) does not mean that 'stupid' mistakes increase our chances of the compiler spitting out fast machine-code. It assumes we wrote the 'usually' correct instructions: the current mantra we 'users' (of the programming language) must follow is: help the compiler, describe what you want, favor common idioms (take hints from asm.js for a basic understanding of what browsers can try to optimize and why).
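As a sketch of that mantra applied to the sum example above (illustrative only, not from the original answer): call the function with one consistent input shape, so the engine can keep a single specialized compilation.

function sumNumbers(arr) {
  // convention: arr is always a packed array of numbers (no holes, no strings)
  var r = 0;
  for (var i = 0; i < arr.length; i++) r += arr[i];
  return r;
}
sumNumbers([1, 2, 3]); // 6 - same input 'shape' on every call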
Because of this, talking about performance is both important BUT ALSO a mine-field, and because of said mine-field I really want to end by pointing to (and quoting) some relevant material:
Access to nonexistent object properties and out of bounds array elements returns the undefined value instead of raising an exception. These dynamic features make programming in JavaScript convenient, but they also make it difficult to compile JavaScript into efficient machine code.
...
An important premise for effective JIT optimization is that programmers use dynamic features of JavaScript in a systematic way. For example, JIT compilers exploit the fact that object properties are often added to an object of a given type in a specific order or that out of bounds array accesses occur rarely. JIT compilers exploit these regularity assumptions to generate efficient machine code at runtime. If a code block satisfies the assumptions, the JavaScript engine executes efficient, generated machine code. Otherwise, the engine must fall back to slower code or to interpreting the program.
Source:
"JITProf: Pinpointing JIT-unfriendly JavaScript Code"
Berkeley publication, 2014, by Liang Gong, Michael Pradel, and Koushik Sen.
http://software-lab.org/publications/jitprof_tr_aug3_2014.pdf
ASM.JS (which also doesn't like out-of-bounds array access):
Ahead-Of-Time Compilation
Because asm.js is a strict subset of JavaScript, this specification only defines the validation logic—the execution semantics is simply that of JavaScript. However, validated asm.js is amenable to ahead-of-time (AOT) compilation. Moreover, the code generated by an AOT compiler can be quite efficient, featuring:
unboxed representations of integers and floating-point numbers;
absence of runtime type checks;
absence of garbage collection; and
efficient heap loads and stores (with implementation strategies varying by platform).
Code that fails to validate must fall back to execution by traditional means, e.g., interpretation and/or just-in-time (JIT) compilation.
http://asmjs.org/spec/latest/
and finally https://blogs.windows.com/msedgedev/2015/05/07/bringing-asm-js-to-chakra-microsoft-edge/
where there is a small subsection about the engine's internal performance improvements when removing the bounds-check (while merely hoisting the bounds-check outside the loop already gave an improvement of 40%).
EDIT:
note that multiple sources talk about different levels of JIT-Recompilation down to interpretation.
Theoretical example based on the above information, regarding the OP's snippet:
Call to isPrimeDivisible
Compile isPrimeDivisible using general assumptions (like no out of bounds access)
Do work
BAM, suddenly array accesses out of bounds (right at the end).
Crap, says the engine, let's recompile that isPrimeDivisible using different (fewer) assumptions; this example engine doesn't try to figure out whether it can reuse the current partial result, so
Recompute all work using slower function (hopefully it finishes, otherwise repeat and this time just interpret the code).
Return result
Hence the time taken was:
First run (which failed at the end) + doing all the work over again using slower machine-code for each iteration + the recompilation etc., which clearly takes >2 times longer in this theoretical example!
EDIT 2: (disclaimer: conjecture based on the facts below)
The more I think about it, the more I think this answer might actually explain the more dominant reason for the 'penalty' on the erroneous snippet (a) (or the performance bonus on snippet (b), depending on how you look at it), which is precisely why I'm adamant in calling it (snippet a) a programming error:
It's pretty tempting to assume that this.primes is a purely numerical 'dense array' which was either
Hard-coded as a literal in source code (a known excellent candidate to become a 'real' array, as everything is already known to the compiler before compile-time), OR
most likely generated using a numerical function filling a pre-sized array (new Array(/*size value*/)) in ascending sequential order (another long-known candidate to become a 'real' array).
We also know that the primes array's length is cached as prime_count! (indicating its intent and fixed size).
We also know that most engines initially pass arrays as copy-on-modify (when needed), which makes handling them much faster (if you don't change them).
It is therefore reasonable to assume that the primes array is most likely already an optimized array internally, one which doesn't get changed after creation (simple for the compiler to determine if there is no code modifying the array after creation) and is therefore already (if applicable to the engine) stored in an optimized way, pretty much as if it were a Typed Array.
As I have tried to make clear with my sum function example, the argument(s) passed highly influence what actually needs to happen, and as such how that particular code is compiled to machine-code. Passing a String to the sum function shouldn't change the string, but it changes how the function is JIT-compiled! Passing an Array to sum should compile a different (perhaps even an additional one for this type, or 'shape' as they call it, of object passed) version of machine-code.
It seems slightly bonkers to convert the Typed-Array-like primes array on the fly to something else while the compiler knows this function is not even going to modify it!
Under these assumptions that leaves 2 options:
Compile as number-cruncher assuming no out-of-bounds, run into out-of-bounds problem at the end, recompile and redo work (as outlined in theoretical example in edit 1 above)
The compiler has already detected (or suspected?) out-of-bounds access up-front, and the function was JIT-compiled as if the argument passed was a sparse object, resulting in slower functional machine-code (as it would have more checks/conversions/coercions etc.). In other words: the function was never eligible for certain optimisations; it was compiled as if it received a 'sparse array'(-like) argument.
I now really wonder which of these 2 it is!
To add some scientific backing to this, here's a jsperf:
https://jsperf.com/ints-values-in-out-of-array-bounds
It tests the control case of an array filled with ints and looping doing modular arithmetic while staying within bounds. It has 5 test cases:
1. Looping out of bounds
2. Holey arrays
3. Modular arithmetic against NaNs
4. Completely undefined values
5. Using a new Array()
It shows that the first 4 cases are really bad for performance. Looping out of bounds is a bit better than the other 3, but all 4 are roughly 98% slower than the best case.
The new Array() case is almost as good as the raw array, just a few percent slower.

Using Facebook's invariant vs if throw

I've been looking at various Node.js projects' source, and I've noticed that some people use invariant. From what I understand, invariant is a tool that lets you put assertions in your code and raise errors as needed.
Question:
When would you favor using invariant vs throwing errors the traditional way?
// Using invariant
function doSomething(a, b) {
  invariant(a > b, 'A should be greater than B');
}

// If throw
function doSomething(a, b) {
  if (a <= b) {
    throw new Error('A should be greater than B');
  }
}
There are a few reasons:
It's easier to read when you want to stack them. If you have, say, 3 preconditions to validate, you always see invariant(x ..., and it's easy to see what's being checked:
function f(xs, x) {
  // all the invariants are lined up, one after another
  invariant(xs.type == x.type, "adding an element with the same type");
  invariant(xs.length != LIST_MAX_SIZE, "the list isn't full");
  invariant(fitting(x), "x is fitting right in the list");
}
Compare with the usual throw approach:
function f(xs, x) {
  if (xs.type != x.type)
    throw new Error("adding an element with the same type");
  if (xs.length == LIST_MAX_SIZE)
    throw new Error("the list isn't full");
  if (!fitting(x))
    throw new Error("x is fitting right in the list");
}
It makes it easy to eliminate these checks in release builds.
It's often the case that you want preconditions checked in dev/test, but don't want them in release because of how slow they'd be.
If you have such an invariant function, you can use a tool like Babel (or some other) to remove these calls from production builds
(this is somewhat like how D does it).
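A minimal sketch of such a strippable invariant (assuming a bundler that inlines process.env.NODE_ENV and removes dead code; the exact setup varies by toolchain):

function invariant(condition, message) {
  if (process.env.NODE_ENV !== 'production') {
    if (!condition) {
      throw new Error('Invariant violation: ' + (message || ''));
    }
  }
}

Once the bundler replaces process.env.NODE_ENV with 'production', the entire body is dead code and can be dropped from the release build.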
zertosh/invariant allows you to add code guards.
As stated in the readme, it is "A way to provide descriptive errors in development but generic errors in production."
However, it is a replication of some of Facebook's internal systems and, IMO, is poorly documented and unmaintained. The scary thing is the 4.4M uses :thinking:
nothing will be stripped out of the box
if you don't have a build tool that somehow removes your message in production, you will still get the original error
the usage in Node is for SSR/React Native, and it's not very useful beyond the "we have fewer lines" aspect
it uses error.framesToPop, which is also a Facebook thing
see: https://github.com/zertosh/invariant/issues?q=is%3Aissue
Note:
A better approach would be to wait for the ES "throw expressions" proposal, which would let you write
cond || throw x
cond ?? throw x
That way the error is only evaluated when actually needed, and the whole check can be stripped by the bundler if cond includes an environment variable that is falsy in the browser build.
Usefulness in TypeScript projects
...Adding on to the previous answers' points about readability, fewer lines of code, and stripping checks from production builds:
If you're using TypeScript, you can use invariant to help narrow down types and get dev-time feedback.
Imagine the scenario below:
We're reading from our filesystem in Node, so the type system has no idea what's in the data; we need a runtime check, and for that we'll want an invariant method that makes runtime checks like this easy.
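A minimal sketch of such an invariant with a TypeScript assertion signature (the file name and the shape of the check are made up for illustration):

import * as fs from 'fs';

function invariant(condition: unknown, message?: string): asserts condition {
  if (!condition) {
    throw new Error(message ?? 'Invariant violation');
  }
}

const raw: unknown = JSON.parse(fs.readFileSync('./config.json', 'utf8'));
invariant(typeof raw === 'object' && raw !== null, 'config must be an object');
// From here on, TypeScript has narrowed `raw` from unknown to a non-null object.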
Note:
There is a modern & popular version of Facebook's invariant package called tiny-invariant, which I recommend: https://github.com/alexreardon/tiny-invariant

Get argument expression before evaluation

I'm trying to create an assert method in Javascript. I've been struggling with arguments.callee.caller and friends for a while, but I can't find a way to reliably get the full text of the calling function and find which match in that text called the current function.
I want to be able to use my function like this:
var four = 5;
function calculate4() { return 6; }
assert(4 == 2 + 3);
assert(4 == four);
assert(4 == calculate4());
assert(4 != 3 && 2 < 1)
and get output like this:
Assertion 4 == 2 + 3 failed.
Assertion 4 == four failed.
Assertion 4 == calculate4() failed.
Assertion 4 != 3 && 2 < 1 failed.
Right now, I can't get much beyond Assertion false failed. which isn't very useful...
I'd like to avoid passing in extra parameters (such as this) because I want to keep the assert code as clean as possible and because it will be typed many, many times. I don't really mind making it a string, but I'm concerned about issues of scoping when trying to eval() that string. If I have no other options, or if my concerns are ill-founded, please say so.
I'm running this in an .hta application on Windows, so it's really jscript and I have full access to the filesystem, ActiveX etc. so system specific solutions are fine (as long as they don't require Firebug etc.). However, I'd prefer a general solution.
There's no reliable way you can do this passing only a single argument. Even with eval, the variables used would be out of scope. Parsing arguments.caller would work if arguments.caller made only one call to assert, by searching for it and parsing the argument expression. Unfortunately, none of the proprietary tools available to you will help.
I ended up using the following function, which allows me to optionally duplicate the text of the assertion as a second argument. It seemed simplest.
function assert(expression, message)
{
  if (!expression) {
    if (message + "" != "undefined" && message + "" != "") {
      document.write("<h2>Assertion <pre>" +
                     message +
                     "</pre> failed.</h2><br>");
    } else {
      document.write("<h2>Assertion failed.</h2><br>");
    }
  }
}
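Usage then just duplicates the expression text as the second argument, for example:

var four = 5;
assert(4 == four, "4 == four"); // writes "Assertion 4 == four failed." as HTML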
Maybe that helps someone. There are probably better methods available, but this worked for me.
Note that I've only been programming in Javascript for three days, so there are probably a number of improvements that could be made.
It is actually possible, at least in browsers and Node.js. I don't know about .hta applications.
Modern browsers, Node.js and hopefully your environment put a stack property on error objects, containing a stack trace. You can construct a new error, and then parse out the file path to the file containing the assert() call, as well as the line number and column number (if available) of the call. Then read the source file, and cut out the assert expression at the given position.
1. Construct an error
2. Parse error.stack to get filepath, lineNumber and columnNumber
3. Read the file at filepath
4. Cut out the bits you want near lineNumber and columnNumber in the file
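A rough sketch of those four steps in Node.js (the stack-trace format varies by engine; the parsing below assumes V8-style frames and a single-line assert call):

const fs = require('fs');

function assert(value) {
  if (value) return;
  // Step 1: construct an error to capture the stack
  const stack = new Error().stack || '';
  // Step 2: frame [2] is the caller; V8 frames look like "at fn (file:line:col)"
  const frame = stack.split('\n')[2] || '';
  const match = frame.match(/\(?([^()]+):(\d+):(\d+)\)?\s*$/);
  if (!match) throw new Error('Assertion failed');
  const [, file, line, column] = match;
  // Step 3: read the source file and grab the line of the call site
  const source = fs.readFileSync(file, 'utf8').split('\n')[Number(line) - 1];
  // Step 4: cut out the argument expression of the assert() call
  const callText = source.slice(Number(column) - 1);
  const expr = callText.slice(callText.indexOf('(') + 1, callText.lastIndexOf(')'));
  throw new Error('Assertion ' + expr + ' failed.');
}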
I've written such an assert function, called yaba, that might get you going.
