The following code throws an error in JavaScript:
console.log(String(+0n))
But this code runs successfully:
console.log(String(-0n))
Why does +0n throw an error while -0n does not?
So that it doesn't break asm.js:
Unary + followed by an expression is always either a Number, or results in throwing. For this reason, unfortunately, + on a BigInt needs to throw, rather than being symmetrical with + on Number: otherwise, previously "type-declared" asm.js code would now be polymorphic.
As Bergi highlights in the comments, this was the least bad of three options:
+BigInt -> BigInt: breaks asm.js, and anything else that made the assumption "unary plus gives a Number";
+BigInt -> Number: conflicts with the design decision to disallow implicit conversions between Number and BigInt; or
+BigInt -> error.
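A quick sketch of the chosen behavior in practice (plain standard BigInt semantics, nothing specific to the proposal text):

const big = 1n;

console.log(-big);            // -1n: unary minus is defined for BigInt
console.log(Number(big) + 1); // 2: explicit conversion is always available

try {
    +big;                     // unary plus must yield a Number...
} catch (e) {
    console.log(e.name);      // "TypeError": ...so it throws instead of converting
}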
+0n is treated as +(BigInt(0)). Unary + means "convert to Number", and that conversion is deliberately not performed implicitly for BigInts (see the asm.js rationale above), so it throws:
console.log(+(BigInt(0)));
-0n, by contrast, is unary minus applied to the BigInt literal 0n. Negation is defined for BigInts and involves no conversion to Number, so it simply evaluates to 0n.
(You need to check your browser console for this, since there seems to be a bug in the StackSnippets console preventing BigInts from being cast to a string in the console.log call.)
console.log(-0n);
Related
I am trying to make a web-app in JavaScript that converts arithmetic expressions to i486-compatible assembly. You can see it here:
http://flatassembler.000webhostapp.com/compiler.html
I have tried to make it able to deal with expressions that contain the increment and decrement operators ("--" and "++"). It now appears to deal correctly with expressions such as:
c++
However, in a response to an expression such as:
c--
the web-app responds with:
Tokenizer error: Unable to assign the type to the operator '-' (whether it's unary or binary).
The error message seems quite self-explanatory. Namely, I made the tokenizer assign a type (unary or binary) to the "-" operator and insert parentheses where needed, so that the parser can deal with expressions such as:
10 * -2
And now, because of that, I am not able to implement the decrement operator. I've been thinking about this for days, and I can't decide what to even try. Do you have any ideas?
Please note that the web-app already deals correctly with expressions such as:
a - -b
The way this works in every language I know of that has these operators is that -- is a single token. So when you see a -, you check whether the very next character is another -. If it is, you generate a -- token (consuming both - characters). If it isn't, you generate a - token (leaving the next character in the buffer).
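Here is a minimal sketch of that lookahead in JavaScript (the tokenizeMinus helper and the token shapes are invented for illustration; they are not from your web-app):

function tokenizeMinus(src, i) {
    // Maximal munch: if the very next character is another '-',
    // consume both and emit a single '--' token.
    if (src[i] === '-' && src[i + 1] === '-') {
        return { token: '--', next: i + 2 };
    }
    // Otherwise emit '-' and leave the next character in the buffer.
    return { token: '-', next: i + 1 };
}

console.log(tokenizeMinus('c--', 1));  // { token: '--', next: 3 }
console.log(tokenizeMinus('- -x', 0)); // { token: '-', next: 1 }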
Then, in the parser, an l-expression followed by a -- token becomes a postfix decrement expression, and a -- token followed by an l-expression becomes a prefix decrement expression. A -- token in any other position is a syntax error.
This means that spaces between -s matter: --x is a prefix decrement (or a syntax error if the language doesn't allow prefix increment and decrement), - -x is a double negative that cancels out to just x.
I should also note that in languages where postfix increment/decrement is an expression, it evaluates to the original value of the operand, not the incremented value. So if x starts out as 5, the value of x++ should be 5 and afterwards the value of x should be 6. So your current code does not actually correctly implement postfix ++ (or at least not in a way consistent with other languages). Also x++ + y++ currently produces a syntax error, so it doesn't seem like it's really supported at all.
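For reference, the standard semantics in JavaScript itself:

let x = 5;
console.log(x++); // 5: postfix evaluates to the original value
console.log(x);   // 6: the variable was still incremented

let y = 5;
console.log(--y); // 4: prefix evaluates to the updated value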
I have gone through similar questions and answers on StackOverflow and found this:
parseInt("123hui")
returns 123
Number("123hui")
returns NaN
Since parseInt() parses up to the first non-digit and returns whatever it has parsed, and Number() tries to convert the entire string into a number, why do they behave so differently in the case of parseInt('') and Number('')?
I feel that, ideally, parseInt("123hui") should return NaN, just like Number("123hui") does.
Now my next question:
As 0 == '' returns true, I believe it is interpreted like 0 == Number(''), which is true. So does the engine really treat it like 0 == Number('') and not like 0 == parseInt(''), or am I missing some point?
The difference is due in part to Number() making use of additional logic for type coercion. Included in the rules it follows for that is:
A StringNumericLiteral that is empty or contains only white space is converted to +0.
parseInt(), on the other hand, is defined to simply find and evaluate numeric characters in the input, based on the given or detected radix. And it is defined to expect at least one valid character.
13) If S contains a code unit that is not a radix-R digit, let Z be the substring of S consisting of all code units before the first such code unit; otherwise, let Z be S.
14) If Z is empty, return NaN.
Note: 'S' is the input string after any leading whitespace is removed.
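Both definitions are easy to verify directly:

console.log(Number(''));             // 0: empty string converts to +0
console.log(Number('   '));          // 0: whitespace-only, same rule
console.log(parseInt('', 10));       // NaN: step 14 - Z is empty
console.log(parseInt('123hui', 10)); // 123: step 13 - stops at the first non-digit
console.log(Number('123hui'));       // NaN: the whole string must be numeric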
As 0 == '' returns true I believe it interprets like 0 == Number('') [...]
The rules that == uses are defined as Abstract Equality.
And, you're right about the coercion/conversion that's used. The relevant step is #6:
If Type(x) is Number and Type(y) is String,
return the result of the comparison x == ToNumber(y).
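So the comparison unfolds like this:

console.log(0 == '');           // true: '' -> ToNumber('') -> 0, then 0 === 0
console.log(0 == parseInt('')); // false: parseInt('') is NaN, which equals nothing
console.log(0 === '');          // false: === never coerces across types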
To answer your question about 0 == '' returning true:
Below is the comparison of a number and string:
The Equals Operator (==)
Type(x)                      Type(y)      Result
------------------------------------------------------------------
x and y are the same type                 Strict Equality (===) Algorithm
Number                       String       x == toNumber(y)
and toNumber does the following to a string argument:
toNumber:
Argument type    Result
---------------------------------------------------
String           In effect evaluates Number(string)
                 "abc" -> NaN
                 "123" -> 123
Number('') returns 0, so that leaves you with 0 == 0, which is evaluated using the Strict Equality (===) algorithm:
The Strict Equals Operator (===)
Type      Values                       Result
----------------------------------------------------------
Number    x is the same value as y     true
          (but not NaN)
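Putting the tables together for this case:

console.log(Number('')); // 0
console.log(0 == '');    // true: '' is coerced to 0, then 0 === 0
console.log(NaN == NaN); // false: NaN is never equal to anything, even itself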
You can find the complete list at javascriptweblog.wordpress.com - truth-equality-and-javascript.
parseInt("") is NaN because the standard says so even if +"" is 0 instead (also simply because the standard says so, implying for example that "" == 0).
Don't look for logic in this because there's no deep profound logic, just history.
You are, in my opinion, making a BIG mistake... and the sooner you correct it, the better it will be for your programming life with JavaScript. The mistake is assuming that every choice made in programming languages, and every technical detail about them, is logical. This is simply not true.
Especially for Javascript.
Please remember that JavaScript was "designed" in a rush and, by a twist of fate, became extremely popular overnight. This forced the community to standardize it before any serious thought could be given to the details, so it was basically "frozen" in its current sad state before any serious testing in the field.
There are parts that are so bad they aren't even funny (e.g. the with statement, or the == equality operator, which is so broken that serious JS IDEs warn about any use of it: you can get A == B, B == C and A != C even using just normal values, without any "special" value like null, undefined, NaN or the empty string "", and not because of precision problems).
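A concrete instance of that non-transitivity, using only ordinary strings and numbers:

var a = '1', b = 1, c = '01';
console.log(a == b); // true: '1' is coerced to the number 1
console.log(b == c); // true: '01' is coerced to the number 1
console.log(a == c); // false: two strings are compared character by character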
Nonsense special cases are everywhere in JavaScript, and trying to fit them into a logical frame is, unfortunately, wasted effort. Just learn its oddities by reading a lot, and enjoy the fantastic runtime environment it provides (this is where JavaScript really shines... browsers and their JITs are a truly impressive piece of technology: you can write a few lines and get real, useful software running on a gajillion different computing devices).
The official standard, where all the oddities are enumerated, is quite hard to read because it aims to be very accurate, and unfortunately the rules it has to specify are really complex.
Moreover, as the language gains more features, the rules will become more and more complex: for example, what in ES5 is just another weird "special" case (e.g. the behavior of the ToPrimitive operation for Date objects) becomes a "normal" case in ES6 (where ToPrimitive can be customized).
I'm not sure this "normalization" is something to be happy about... the real problem is the frozen starting point, and there are no easy solutions now (unless you want to throw away all existing JavaScript code, that is).
The normal path for a language is to start clean, nice, symmetric and small. Then, when facing real-world problems, the language gains (is infected by) some ugly parts, because the world is ugly and asymmetrical.
JavaScript is like that, except that it didn't start nice and clean, and moreover there was no time to polish it before it was thrown into the game.
There was a mistake in some Javascript code where the decimal 0.04 was declared as 0.0.4 like the following:
var x = 0.0.4;
When this was run in Firefox the error given was:
SyntaxError: missing ; before statement
IE stated:
Expected ';'
And Chrome stated:
Uncaught SyntaxError: Unexpected number
I understand that 0.0.4 is not a number or a valid literal, but how is JavaScript trying to interpret the statement? Why exactly does it think there is a missing ;?
Chrome
Chrome sees the first part of the assignment as the floating-point number 0.0, on which you then try to access a property with key 0 - which is not allowed in JavaScript (you cannot use dot notation to reference numeric keys). In other words, you can split the assignment into the following equivalent:
var x = 0.0
x.0 // SyntaxError: Unexpected number
In comparison, this is valid:
var x = 0.0.toString()
In my opinion, this is the most logical parser implementation, with the error message that makes the most sense.
Firefox
Firefox sees the first part of the assignment as the floating-point number 0.0, but gets confused by the .0 part - it does not recognise it as a property access, and instead assumes you want to start a new statement (.4, which is shorthand for 0.4) - thus it tells you that you missed the semicolon that would terminate the first statement.
It still correctly interprets the statement when you use a valid property accessor (a string):
var x = 0.0.toString()
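The underlying rule in both engines: a dot immediately after digits is consumed as part of the number literal, so a property access on a numeric literal needs the literal to be unambiguously finished first:

console.log(0.0.toString()); // "0": the literal is 0.0, the second dot is the accessor
console.log((0).toString()); // "0": parentheses end the literal explicitly
console.log(0..toString());  // "0": the first dot belongs to the literal '0.'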
Somewhat related is also this SO question which discusses similar behaviour.
Depending on the parser (browser), the way it handles a line of code may differ. My thought is that when IE and Firefox see a number, they "expect" only a single decimal point, followed by numeric characters, until either a space or a ';' occurs.
If another character type is encountered, the parser throws an exception saying it was looking for the end of the number - hence the Expected ';'.
As for how Chrome handled it, it seems it can tell the token was supposed to be a number, but that it was given in an invalid format.
The messages from IE and Firefox make perfect sense. Parse the number part of the statement as 0.0, perform the assignment x = 0.0, then expect an end of statement. If the character following isn't a semi-colon or newline (implicit end of statement), throw an error saying we expect one.
Chrome's is a little more cryptic, but still makes sense with a little thought. It has performed the assignment as with FF and IE (x = 0.0) but has interpreted the dot as a dot-operator rather than a decimal. (x = 0.0.toString() would be perfectly valid.) Rather than stating what it expected, it's stating what it saw, an "unexpected number", which was the 4.
It's unclear whether IE and FF first parsed the .4 as a number or if the unexpected character was simply a dot.
I came across some eval code:
eval('[+!+[]+!+[]+!+[]+!+[]+!+[]]');
This code equals the integer 5.
What is this type of thing called? I've tried searching the web but I can't seem to figure out what this is referred to as. I find this very interesting and would like to know where/how one learns how to print different things instead of just the integer 5 - letters, symbols, etc. Since I can't pinpoint a pattern in that code I've had 0 success taking from and adding to it to make different results.
Is this some type of obfuscation?
This type of obfuscation, eval() aside, is known as Non-alphanumeric obfuscation.
To be completely non-alphanumeric, the eval would have to be performed via Array prototype functions and subscript notation:
[]["sort"]["constructor"]("string to be evaled");
These strings are then converted to non-alphanumeric form.
AFAIK, it was first proposed by Yosuke Hosogawa around 2009.
If you want to see it in action see this tool: http://www.jsfuck.com/
It is not considered a good type of obfuscation because it is easy to reverse back to the original source code, without even having to run the code (statically). Plus, it increases the file size tremendously.
But it's an interesting form of obfuscation that exploits JavaScript type coercion. To learn more about it, I recommend this presentation, slide 33:
http://www.slideshare.net/auditmark/owasp-eu-tour-2013-lisbon-pedro-fortuna-protecting-java-script-source-code-using-obfuscation
That's called Non-alphanumeric JavaScript and it's possible because of JavaScript type coercion capabilities. There are actually some ways to call eval/Function without having to use alpha characters:
[]["filter"]["constructor"]('[+!+[]+!+[]+!+[]+!+[]+!+[]]')()
After you replace the strings "filter" and "constructor" with non-alphanumeric representations, you have fully non-alphanumeric JavaScript.
If you want to play around with this, there is a site where you can do it: http://www.jsfuck.com/.
Check this https://github.com/aemkei/jsfuck/blob/master/jsfuck.js for more examples like the following:
'a': '(false+"")[1]',
'b': '(Function("return{}")()+"")[2]',
'c': '([]["filter"]+"")[3]',
...
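To see why entries like these work, trace one by hand (the exact function-to-string output is engine-dependent, but in practice it starts with "function"):

console.log(false + "");             // "false": boolean coerced to a string
console.log((false + "")[1]);        // "a": character at index 1 of "false"
console.log([]["filter"] + "");      // "function filter() { [native code] }"
console.log(([]["filter"] + "")[3]); // "c": index 3 of "function..."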
To get the value 5, the expression should have been like this:
+[!+[] + !+[] + !+[] + !+[] + !+[]]
Let us analyze the common element first: !+[].
The + operator applied to an array literal tries to convert it to a number; since the array is empty, JavaScript evaluates it to 0.
0 is falsy, so the ! operator converts it to true.
So,
console.log(!+[]);
would print true. Now, the expression can be reduced like this
+[true + true + true + true + true]
Since true is treated as 1 in arithmetic expressions, the actual expression becomes
+[ 5 ]
The + operator tries to convert the array ([ 5 ]) to a number, which results in 5. That is why you are getting 5.
I don't know of any term used to describe this type of code, aside from "abusing eval()".
I find this very interesting and would like to know where/how one learns how to print different things instead of just the integer 5 - letters, symbols, etc. Since I can't pinpoint a pattern in that code I've had 0 success taking from and adding to it to make different results.
This part I can at least partially answer. The eval() you pasted relies heavily on Javascript's strange type coercion rules. There are a lot of pages on the web describing various strange consequences of the coercion rules, which you can easily search for. But I can't find any reference on type coercion with the specific goal of getting "surprising" output from things like eval(), unless you count this video by Destroy All Software (the Javascript part starts at 1:20). Most of them understandably focus on how to avoid strange bugs in your code. For your purpose, I believe the most useful things I know of are:
The ! operator will convert anything to a boolean.
The unary + operator will convert anything to a number.
The binary + operator will coerce its arguments to either numbers or strings before adding or concatenating them. Normally it will only go with strings if one or the other argument is already a string, but there are exceptions.
The bitwise operators output integers, regardless of input.
Converting to numbers is complicated. Booleans go to 0 or 1. Strings are parsed as numeric literals (note: not via parseInt()), producing NaN if the whole string isn't numeric. I forget the rest.
Converting an object to a string or number will invoke its "toString" or "valueOf" method respectively, if one exists.
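A few of those rules in action:

console.log(!"anything"); // false: ! first converts the string to true, then negates
console.log(+true);       // 1: unary + converts to a number
console.log(1 + "2");     // "12": binary + concatenates when a string is involved
console.log("3" | 0);     // 3: bitwise operators coerce to 32-bit integers
console.log(+{ valueOf: function () { return 7; } }); // 7: valueOf is consulted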
You get the idea. thefourtheye's answer walks through exactly how these rules apply to the example you gave. I'm not even going to try summarizing what happens when Dates, Functions, and Regexps are involved.
Is this some type of obfuscation?
Normally you'd simply get obfuscation for free as part of minification, so I have no idea why someone would write that in real production code (assuming that's where you found it).
I have the following code
if (msg.position == 0)
    //removed for brevity
else if (msg.position == txtArea.value.length)
    //removed for brevity
else {
    //ERROR: should not reach here.
    errorDivTag.innerHTML += msg.position + " " + txtArea.value.length;
}
I'm having some really weird situations where I'm getting the error in the last code block, but the printed positions show that msg.position is in fact equal to the txtArea.value.length. This only happens 1% of the time, almost as if I have some kind of race-condition in my code where the two are NOT equal during the second if statement, but equal when I print in the error message.
Any ideas?
If you use
parseInt(msg.position)
without a radix, you will run into problems with 08 and 09, because older engines parse a leading zero as octal, and since 8 and 9 are not valid octal digits the result is 0 rather than 8 or 9. Always use a radix:
parseInt(msg.position, 10)
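A quick demonstration of why the radix matters (the octal behavior applies to older, pre-ES5 engines):

// In an engine that treats a leading zero as octal, parseInt('08')
// stops at the invalid octal digit '8' and returns 0.
console.log(parseInt('08', 10)); // 8: an explicit radix forces decimal
console.log(parseInt('09', 10)); // 9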
To start with, always use ===. That will prevent JavaScript from automatically coercing the types in the comparison, which means you'll be able to spot all sorts of bugs much more easily. In this case, it's possible you have some whitespace (which is basically impossible to see in the output) that is causing a string comparison instead of the (I assume) desired numeric comparison.
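The difference in one pair of lines:

console.log('5' == 5);  // true: == coerces the string to a number first
console.log('5' === 5); // false: === compares type as well as value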
Also, I'm assuming you really meant to have { after your if and else if conditions. If not, that could be causing all sorts of strange behavior, depending on the code you removed due to brevity concerns. If you didn't, then you've got an extraneous } before your else condition.
UPDATE: Set a breakpoint in Firebug/DeveloperTools/DragonFly/whatever and inspect the values as the comparison occurs.
Did you try changing the statement to...
parseInt(msg.position, 10) == txtArea.value.length
=== is stricter than == and is often useful. But this is the opposite problem from the one you have here, where something looks equal but isn't == or === (if something isn't ==, it will never be ===).
Is msg.position a String? Perhaps it contains a space or another similar character.
I had this problem today with a checksum value in one of my js modules. A test was showing that two values were not equal, yet printing the values showed they were equal.
Ran it in the debugger and (re-)discovered that numbers in JavaScript are 64-bit floating-point quantities. One of the numbers displayed as negative in the debugger - exactly (0xFFFFFFFF + 1) less than the other number. Yet, when printed, they displayed as exactly the same.
I was using a custom routine to format them in hex, which probably had something to do with it. That combination of circumstances seems unlikely in your case though.
I discovered the sign issue in my code by computing the delta between the numbers, and displaying that. It showed up as MAX_UINT32 + 1, which reminded me that these numbers are really 64-bit floats.
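A sketch of how that can happen, assuming (hypothetically) that the custom hex routine truncated values to 32 bits when formatting:

var a = 0x1FFFFFFFF;     // 8589934591: needs more than 32 bits
var b = a - 0x100000000; // 4294967295: exactly (0xFFFFFFFF + 1) less
console.log((a >>> 0).toString(16)); // "ffffffff": >>> keeps only the low 32 bits
console.log((b >>> 0).toString(16)); // "ffffffff": formats identically
console.log(a === b);                // false: the values were never equal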