In my Angular app I am making a $http.get() request to a URL that responds with a JSON object. This object contains a value that is occasionally a very large number (e.g. 9106524608436223400). Looking at the network profiler in Chrome I can see that the number comes down intact, but by the time the $http.get() callback fires the number has been corrupted. I assume this is because the number is very large and not a string. Is there any way to get Angular to handle this response correctly, or do I need to wrap my server's output as a string? Thanks.
Numbers in JavaScript are double precision floating point numbers. This means that they can only represent integers with full precision up to 53 bits.
Any code that parses the JSON and represents the number as a regular JavaScript number will be unable to give you the unchanged value.
The JSON standard doesn't specify any limitation for the range or precision for numbers. However, as JSON is based on a subset of the JavaScript syntax, one could argue that the format doesn't support numbers outside of what could be represented in JavaScript.
To safely get the value unchanged, you would need to put it as a string in the JSON, or split it up into two or more smaller numbers.
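For instance, here is a minimal sketch of the string approach applied on the client side, pre-processing the raw response text before JSON.parse runs. The regex and the 16-digit threshold are illustrative assumptions, not production-ready:

// Sketch: quote any bare integer value of 16+ digits so it survives
// parsing as a string. The regex is a simplifying assumption and would
// need hardening (e.g. against digit runs inside string values).
function parseWithBigIntsAsStrings(rawJson) {
  var quoted = rawJson.replace(/:\s*(\d{16,})/g, ':"$1"');
  return JSON.parse(quoted);
}

var raw = '{"value":9106524608436223400}';
console.log(parseWithBigIntsAsStrings(raw).value); // "9106524608436223400"

In AngularJS a transform like this could be wired in through $http's transformResponse option, though having the server emit the value as a string in the first place is the simpler fix.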
Related
I have some logic within a function that takes a string of numbers called digits like so:
6145390195186705543
I then attempt to convert with parseInt() like so:
parseInt(digits)
The result of:
digits = parseInt(digits);
is
6145390195186705000
Can someone help me understand why this is the case, and how I can get an accurate conversion?
This is another version of "broken" floating point math: JavaScript uses 64 bits to store numbers ranging from smaller than the size of an atom up to the number of atoms in the universe. A range that broad cannot be stored with full accuracy, so numbers are stored imprecisely in exchange for it. In your case, 6145390195186705000 is the inexact version JS is able to store, because 6145390195186705543 cannot be represented exactly.
and how i can get an accurate conversion?
You cannot store an "accurate number", and therefore you cannot convert it accurately. However, there are libraries that allow you to work with strings as if they were numbers, such as BigIntJS.
As this is a common problem, it is going to be solved in the next version of JS; see this proposal. You can already test it in the newest version of Chrome.
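For example, a minimal sketch of what the proposal enables (runs in recent Chrome):

// BigInt keeps arbitrary-precision integers exact where Number cannot.
const digits = "6145390195186705543";
console.log(parseInt(digits, 10));      // 6145390195186705000 (precision lost)
console.log(BigInt(digits).toString()); // "6145390195186705543" (exact)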
I perform an AJAX call to generate an ID. This ID is sent back to the client in the response and shown in an input field. I was made aware that the ID displayed in the browser is not the one generated - the last digit differs. On the server side I serialize data to pass it back to the client using Adobe ColdFusion's own serializeJSON() function. It recognizes the sequence of digits and serializes it as a number. I logged the contents of my variables at different places in my code, and it looked fine all the way. Only the browser does not do what I want/expect.
I boiled it down to this simple sample:
var stru = {"MYID":2761602017000540006};
console.dir(stru);
The console logs 2761602017000540000 instead of 2761602017000540006
Why is that? Is this number too large to be stored in JavaScript?
Is the number too large to be stored in JavaScript?
Yes, the max safe integer is 9,007,199,254,740,991 and the number you're attempting to send is 2,761,602,017,000,540,006 (roughly 300x larger).
This is because the JavaScript number type follows the IEEE 754 64-bit floating point format, which does not allow integers as large as a 64-bit integer type normally would. You can see the definition of the number type in the ECMAScript spec, section 4.3.20.
I suggest you send the ID over as a String.
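As a quick sketch of the boundary involved, using the value from the question:

// The numeric literal silently snaps to the nearest representable
// double, while a string keeps every digit.
var stru = {"MYID": 2761602017000540006};
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(stru.MYID);               // 2761602017000540000
var safe = {"MYID": "2761602017000540006"};
console.log(safe.MYID);               // "2761602017000540006"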
In JavaScript, you have at most 53 bits for integers, so you cannot put integers larger than 53 bits into JavaScript variables. The alternative is to use strings for storing this long ID. I hope this helps.
As Arash said, your number is too long (more than 53 bits).
You can find more information on this topic here: Javascript long integer
The only solution seems to be to use strings instead of numbers.
I want to send game result data as binary, partly for efficiency (sending 6 bytes per item instead of 13 more than halves the total amount of data to send, and as there can be a few hundred of these items the result is a huge saving), and partly for obfuscation (people monitoring network activity would see seemingly random bytes instead of distinguishable data).
My "code" (not in use yet, just a prototype) is as follows:
String.fromCharCode.apply(null, somevar.toString(16)
    .split(/(?=(?:..)+$)/)
    .map(function (a) { return parseInt(a, 16); }))
This will convert any integer value into a binary string value.
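For example, with a hypothetical value of 123456 (hex "1e240"), the expression above yields a three-character string:

// "1e240" splits into ["1", "e2", "40"] -> char codes [1, 226, 64]
var somevar = 123456;
var s = String.fromCharCode.apply(null, somevar.toString(16)
    .split(/(?=(?:..)+$)/)
    .map(function (a) { return parseInt(a, 16); }));
console.log(s.length);        // 3
console.log(s.charCodeAt(1)); // 226 (0xe2)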
However, I seem to recall that AJAX and binary data don't mix. I'd like to know what range of values is safe to use. Should I stick to the range 32-255, or go even safer and stick to 32-127? In the case of 32-255, I can use 15 as the base in the above code and add 32 to all the numbers, so that'd work for me.
But really I'm more interested in the character range question, and if there is any cross-browser (among browsers that support Canvas) way to transfer binary data?
AJAX and binary data do not conflict with each other. What happens is, when you make an AJAX call, the data is posted as form data. When you post form data, you would usually encode it as application/x-www-form-urlencoded. The encoded data only contains letters/numbers and certain special characters; for example, a space is encoded as %20. For this reason, it may not save you any space at all even if you convert your "normal" letters to binary, because eventually everything has to be encoded again.
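To see the effect, here is a small sketch (the byte values are arbitrary):

// Three "binary" characters balloon once they are percent-encoded
// (encodeURIComponent encodes non-safe characters as UTF-8):
var binary = String.fromCharCode(0x01, 0x8f, 0xfe);
var encoded = encodeURIComponent(binary);
console.log(encoded);        // "%01%C2%8F%C3%BE"
console.log(encoded.length); // 15 characters on the wire instead of 3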
I have a WCF service operation that returns an object with long and List<string> properties. When I test the operation in a WCF application, everything works fine and the values are correct. However, I need to be able to call the service using jQuery and JSON format. The value of the long property apparently changes when I read it back in the OnSucceed function.
After searching I've found that JSON.stringify changes big values. So in code like this:
alert(JSON.stringify(25001509088465005));
...it will show the value as 25001509088465004.
What is happening?
Demo here: http://jsfiddle.net/naveen/tPKw7/
JavaScript represents numbers using the IEEE-754 double-precision (64-bit) format. As I understand it, this gives you 53 bits of precision, or fifteen to sixteen decimal digits. Your number has more digits than JavaScript can cope with, so you end up with an approximation.
Do you need to do maths operations on this big number? Because if it's just some kind of ID, you can return it as a string and avoid the problem.
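A sketch of that string-based round trip, assuming the service can be changed to emit the ID as a JSON string:

// With the ID serialized as a string, parse/stringify keep it exact:
var json = '{"id":"25001509088465005"}';
var obj = JSON.parse(json);
console.log(obj.id);              // "25001509088465005"
console.log(JSON.stringify(obj)); // {"id":"25001509088465005"}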
(Similar questions to this have been asked on StackOverflow, but not exactly this. The nearest is probably "javascript how to convert unicode string to ascii", where there is already the remark "this has to be a dup[licate]". I have read some similar posts, but they don't answer my specific question. I've looked on the very good W3Schools site, and have also Googled it, but not found the answer that way either. So any hints here would be very much appreciated.)
I have an array of bytes being passed to a piece of JavaScript. In the JavaScript the data arrives in a string. I do not know the mechanism of transfer, as it's from a 3rd-party application. I do not know even whether the string is "wide" or "narrow".
In my JavaScript, I have some code like b = str.charCodeAt(pos);.
My problem is that a byte value such as 0x86 = 134 is coming through as character 0x2020 = 8224. This seems to be because my original byte is being interpreted as a (probably) Latin-1 'dagger' character, which is then translated to the equivalent Unicode code point. (The problem may or may not be JavaScript's 'fault'.) Similar problems occur with other values: the ranges 0x00..0x7F and 0xA0..0xFF seem to be fine, but most values from 0x80..0x9F are affected, and in each case the resulting value seems to be the Unicode equivalent of the original Latin-1.
Another observation is that the length of the string is what I'd expect for narrow string if the length was measured in bytes. (On the other hand, if length returns a value in abstract characters, this doesn't tell me anything.)
So, in JavaScript, is there a way of getting at the 'raw' bytes in a string, or of getting a Latin-1 or ASCII character code directly, or of converting between character encodings, or of defining the default encoding?
I could write my own mapping, but I'd rather not. I expect that is what I'll end up doing, but that has the feel of a kludge on a kludge.
I'm also looking into whether there's anything I can adjust in the calling application (as it could be passing the data as a wide string, although I doubt it).
Either way, though, I'd be interested in whether there is a simple JavaScript solution, or to understand why there isn't.
(If the incoming data was character data, having Unicode dealt with so automatically would be great. But it's not, it's just a binary data stream.)
Thanks.
There is no such thing as the raw bytes in a String. The ECMAScript spec defines a string as a sequence of UTF-16 code units. That is the most fine-grained representation exposed by any interpreter I have ever encountered.
In the browser there are no built-in encoding libraries. You have to roll your own if you are trying to represent a byte array as a string and want to re-encode it.
If your string already happens to be valid ASCII, then you can get the numeric value of a code unit by using the charCodeAt method.
"\n".charCodeAt(0) === 10
Start with the JavaScript (ECMAScript) spec: http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf. It says:
8.4 The String Type
The String type is the set of all finite ordered sequences of zero or more 16-bit unsigned integer values (“elements”). The String type is generally used to represent textual data in a running ECMAScript program, in which case each element in the String is treated as a code unit value (see Clause 6). Each element is regarded as occupying a position within the sequence. These positions are indexed with nonnegative integers. The first element (if any) is at position 0, the next element (if any) at position 1, and so on. The length of a String is the number of elements (i.e., 16-bit values) within it. The empty String has length zero and therefore contains no elements.

When a String contains actual textual data, each element is considered to be a single UTF-16 code unit. Whether or not this is the actual storage format of a String, the characters within a String are numbered by their initial code unit element position as though they were represented using UTF-16. All operations on Strings (except as otherwise stated) treat them as sequences of undifferentiated 16-bit unsigned integers; they do not ensure the resulting String is in normalised form, nor do they ensure language-sensitive results.

NOTE The rationale behind this design was to keep the implementation of Strings as simple and high-performing as possible. The intent is that textual data coming into the execution environment from outside (e.g., user input, text read from a file or received over the network, etc.) be converted to Unicode Normalised Form C before the running program sees it. Usually this would occur at the same time incoming text is converted from its original character encoding to Unicode (and would impose no additional overhead). Since it is recommended that ECMAScript source code be in Normalised Form C, string literals are guaranteed to be normalised (if source text is guaranteed to be normalised), as long as they do not contain any Unicode escape sequences.
What charCodeAt(p) gives you is the UTF-16 code unit (a 16-bit number) of the character at index p in the string. Since UTF-16 directly represents Unicode's Basic Multilingual Plane (that is, code points U+0000–U+D7FF and U+E000–U+FFFF), your Latin-1 characters should be the values you expect them to be.
The fact that they are not suggests to me that you have an encoding problem with the inbound 3rd-party octet stream: if the conversion to UTF-16 gets the encoding of the inbound octet stream wrong, you'll get odd results.
Perhaps it is being treated as vanilla ASCII when in fact it is UTF-8 (or vice versa). UTF-8 represents code points above 0x7F as 2-, 3- or 4-octet sequences.
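If rolling your own mapping turns out to be the answer, note that the 'dagger' behaviour points at Windows-1252 rather than true Latin-1: bytes 0x80-0x9F are control codes in Latin-1 but printable characters in Windows-1252 (where 0x86 is indeed U+2020), and browsers habitually treat the two encodings as the same. A minimal sketch of the reverse map for that range follows; byteAt is a hypothetical helper name:

// Sketch: undo the Windows-1252-to-Unicode translation for 0x80-0x9F.
// (Bytes 0x81, 0x8D, 0x8F, 0x90 and 0x9D are undefined in Windows-1252.)
var cp1252Reverse = {
  0x20AC: 0x80, 0x201A: 0x82, 0x0192: 0x83, 0x201E: 0x84, 0x2026: 0x85,
  0x2020: 0x86, 0x2021: 0x87, 0x02C6: 0x88, 0x2030: 0x89, 0x0160: 0x8A,
  0x2039: 0x8B, 0x0152: 0x8C, 0x017D: 0x8E, 0x2018: 0x91, 0x2019: 0x92,
  0x201C: 0x93, 0x201D: 0x94, 0x2022: 0x95, 0x2013: 0x96, 0x2014: 0x97,
  0x02DC: 0x98, 0x2122: 0x99, 0x0161: 0x9A, 0x203A: 0x9B, 0x0153: 0x9C,
  0x017E: 0x9E, 0x0178: 0x9F
};

function byteAt(str, pos) {
  var code = str.charCodeAt(pos);
  if (code <= 0xFF) return code;  // already the raw byte value
  if (code in cp1252Reverse) return cp1252Reverse[code];  // translated byte
  throw new Error("Unmappable code unit 0x" + code.toString(16));
}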