Why is DataView by default in Big Endian? - javascript

Could someone explain why DataView uses big-endian byte order by default, while our computers (including ARM processors) work in little endian?
Typed arrays like Uint32Array() already use little endian, which in my opinion is correct.
// DataView (Big Endian)
const dataview = new DataView(new ArrayBuffer(4));
dataview.setUint32(0, 42);
console.log(new Uint8Array(dataview.buffer).toString());
// result: 0,0,0,42
// Typed Array (Little Endian)
const typearray = new Uint32Array([42]);
console.log(new Uint8Array(typearray.buffer).toString());
// result: 42,0,0,0
I expected little endian for number types; it's not consistent.
I know about the optional littleEndian argument of the DataView methods.
But my question is: why is it not set by default?

This is the signature of the setUint32 method as defined by the ECMAScript standard:
24.2.4.20 DataView.prototype.setUint32 ( byteOffset, value [ , littleEndian ] ) (https://www.ecma-international.org/ecma-262/6.0/#sec-dataview-constructor).
Whether the value is stored as little endian or big endian is defined by the optional 3rd parameter, whose default value is defined by the standard as false (store as big endian).
Big endian is the default byte ordering for all TCP/IP network protocols and it is not rare at all.
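For completeness, passing true as the third argument stores the value little endian, matching the typed-array behavior from the question:
const view = new DataView(new ArrayBuffer(4));
view.setUint32(0, 42, true); // littleEndian = true
console.log(new Uint8Array(view.buffer).toString());
// result: 42,0,0,0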

Related

Paper.js exportJSON (Options) how to specify the Options?

I am using Paper.js in JavaScript because I need to debug my code.
I want to generate a JSON string of my drawings, which works well.
But I need to reduce the precision.
For example, 0 comes out as 0.0003, and 510.05 as 510.05005, things like that.
Documentation mentions:
————
exportJSON([options])
Exports (serializes) the project with all its layers and child items to a JSON data object or string.
Options:
options.asString: Boolean — whether the JSON is returned as an Object or a String — default: true.
options.precision: Number — the amount of fractional digits in numbers used in JSON data — default: 5.
Parameters:
options: Object — the serialization options — optional
Returns:
String — the exported JSON data
I do not understand what this means. How do I specify these options? Whatever I try ends up in a crash.
I have been programming in JavaScript for about 3 months; I come from C and assembler languages.
Maybe my question is too simple for this forum?
I did try:
json_vect_string = layer_wall_vector.exportJSON(true, 2);
json_vect_string = layer_wall_vector.exportJSON(asString = true, precision = 2);
json_vect_string = layer_wall_vector.exportJSON(options.asString = true, options.precision = 2);
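Going by the documentation quoted above, the options are passed as a single object literal (not as separate positional arguments), so something along these lines should work:
json_vect_string = layer_wall_vector.exportJSON({ asString: true, precision: 2 });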

How to make JSON.parse() to treat all the Numbers as BigInt?

I have some numbers in JSON that overflow the Number type, so I want them parsed as bigint, but how?
{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}
TLDR;
You may employ the JSON.parse() reviver parameter.
Detailed Solution
To control JSON.parse() behavior that way, you can make use of the second parameter of JSON.parse() (reviver), the function that pre-processes key-value pairs (and may potentially pass desired values to BigInt()).
Yet the values recognized as numbers will already have been coerced to Number by the time the reviver sees them (the credit for pinpointing this issue goes to @YohanesGultom).
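A quick illustration of the problem:
// the reviver only sees the already-parsed Number,
// so the precision is gone before it can intervene
const naive = JSON.parse('{"big":2323866757078990912}', (key, value) => value);
console.log(naive.big); // 2323866757078991000
console.log(Number.isSafeInteger(naive.big)); // false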
To get around this, you may enquote your big numbers (to turn them into strings) in your source JSON string, so that their values are preserved upon converting to bigint.
Since you wish to convert only certain numbers to bigint, you need to pick appropriate criteria (e.g. check whether the value exceeds Number.MAX_SAFE_INTEGER with Number.isSafeInteger(), as @PeterSeliger has suggested).
Thus, your problem may be solved with something like this:
// source JSON string
const input = `{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}`
// function that implements desired criteria
// to separate *big numbers* from *small* ones
//
// (works for input parameter num of type number/string)
const isBigNumber = num => !Number.isSafeInteger(+num)
// function that enquotes *big numbers* matching
// desired criteria into double quotes inside
// JSON string
//
// (function checking for *big numbers* may be
// passed as a second parameter for flexibility)
const enquoteBigNumber = (jsonString, bigNumChecker) =>
  jsonString
    .replaceAll(
      /([:\s\[,]*)(\d+)([\s,\]]*)/g,
      (matchingSubstr, prefix, bigNum, suffix) =>
        bigNumChecker(bigNum)
          ? `${prefix}"${bigNum}"${suffix}`
          : matchingSubstr
    )
// parser that turns matching *big numbers* in
// source JSON string to bigint
const parseWithBigInt = (jsonString, bigNumChecker) =>
  JSON.parse(
    enquoteBigNumber(jsonString, bigNumChecker),
    (key, value) =>
      !isNaN(value) && bigNumChecker(value)
        ? BigInt(value)
        : value
  )
// resulting output
const output = parseWithBigInt(input, isBigNumber)
console.log("output.foo[1][0]: \n", output.foo[1][0], `(type: ${typeof output.foo[1][0]})`)
console.log("output.bar[0][0]: \n", output.bar[0][0].toString(), `(type: ${typeof output.bar[0][0]})`)
Note: you may find the RegExp pattern used to match digit strings among the JSON values not quite robust, so feel free to come up with your own (mine was the quickest I managed to pull off the top of my head for demo purposes).
Note: you may still opt for some library, as suggested by @YohanesGultom, yet adding 10k to your client bundle or 37k to your server-side dependencies (possibly to a docker image size) for that sole purpose may not be quite reasonable.

How to serialize 64-bit integer in JavaScript?

Datadog Tracing API requires 64-bit integers serialized as JSON numbers.
{
  "span_id": 16956440953342013954,
  "trace_id": 13756071592735822010
}
How can I create JSON with 64-bit integer numbers using JavaScript?
This is actually a lot harder than it seems.
Representing big integers in JavaScript can be done using the BigInt data type (by suffixing the number with n), which is fairly widely supported at this point.
This would make your object look like this:
const o = {
  span_id: 16956440953342013954n,
  trace_id: 13756071592735822010n
};
The problem presents itself in the JSON serialization, as there is currently no support for the serialization of BigInt objects. And when it comes to JSON serialization, your options for customization are very limited:
The replacer function that can be used with JSON.stringify() will let you customize the serialization behavior for BigInt objects, but will not allow you to serialize them as a raw (unquoted) string (see the sketch after this list).
For the same reason, implementing the toJSON() method on the BigInt prototype will also not work.
Due to the fact that JSON.stringify() does not seem to recursively call itself internally, solutions that involve wrapping it in a proxy also will not work.
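For instance, here is what the replacer route produces (a quick sketch; the replacer converts the BigInt to a string, and JSON.stringify() then quotes it):
const o = { span_id: 16956440953342013954n };
const json = JSON.stringify(o, (key, value) =>
  typeof value === 'bigint' ? value.toString() : value
);
console.log(json); // {"span_id":"16956440953342013954"} (quoted, not a raw number)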
So the only option that I can find is to (at least partially) implement your own JSON serialization mechanism.
This is a very poor man's implementation that calls toString() for object properties that are of type BigInt, and delegates to JSON.stringify() otherwise:
const o = {
  "span_id": 16956440953342013954n,
  "trace_id": 13756071592735822010n
};
const stringify = (o) => '{'
  + Object.entries(o).reduce((a, [k, v]) => ([
      ...a,
      `"${k}": ${typeof v === 'bigint' ? v.toString() : JSON.stringify(v)}`
    ]), []).join(', ')
  + '}';
console.log(stringify(o));
// {"span_id": 16956440953342013954, "trace_id": 13756071592735822010}
Note that the above will not work correctly in a number of cases, most prominently nested objects and arrays. If I were to do this for real-world usage, I would probably build on Douglas Crockford's JSON implementation. It should be sufficient to add an additional case around this line:
case "bigint":
return value.toString();

Javascript ArrayBuffer - Cannot set null value to arraybuffer

// Creating an ArrayBuffer with a size in bytes
var buffer = new ArrayBuffer(16);
// Creating views
var view1 = new DataView(buffer);
// Attempting to put null in slot 0
view1.setInt8(0, null);
console.log(view1.getInt8(0));
Result:
0
Expected:
null
How can I set null/empty data? Is there a way to mark null data in an ArrayBuffer?
E.g. we have a CSV file with data like this:
0,,1,0
Thank you so much
From the MDN ArrayBuffer docs (emphasis mine):
The ArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer.
I.e. ArrayBuffers hold binary (Number) values only. For this reason, the DataView API will only let you set float or integer values. null, however, is not a Number; it's one of JS's primitive values.
You can further see this in the ECMAScript specification, where step 4 of the abstract SetValueInBuffer operation states: "Assert: Type(value) is Number." The spec does not define how to handle non-Number types, however. One could argue that a TypeError should be thrown in this case, but all the implementations I checked (Chrome, Safari, Firefox, Node.js) quietly coerce the value through ToNumber, with null ending up as zero... which is what you're seeing. You'll get similar silent coercion if you pass a String, Date, RegEx, Boolean, or undefined.
(If you pass a BigInt or Symbol, however, you appear to get a TypeError... weird.)
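You can watch the coercion happen:
const view = new DataView(new ArrayBuffer(4));
view.setInt8(0, null);      // null coerces to 0
view.setInt8(1, undefined); // undefined -> NaN -> stored as 0
view.setInt8(2, true);      // true coerces to 1
view.setInt8(3, "abc");     // non-numeric string -> NaN -> 0
console.log(new Uint8Array(view.buffer).toString());
// result: 0,0,1,0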

Forward arraybuffer from C to JS with node-api

I'm currently trying to do some low-level coding with JS.
For that reason I'm using https://nodejs.org/api/n-api.html to add custom C code to my Node.js runtime.
I got passing values and changing them in C to work, even reading ArrayBuffers and interpreting them the way I want in C, but I'm only able to return limited JS values (numbers and strings, as seen in this part: https://nodejs.org/api/n-api.html#n_api_functions_to_convert_from_c_types_to_n_api).
Does anybody know how to create N-API ArrayBuffers? I'd want to give my JS a certain buffer I defined in C, and then work on it via DataViews.
I found the answer:
https://nodejs.org/api/n-api.html#n_api_napi_create_external_arraybuffer
I was searching for different keywords than "external", but this is exactly what I was looking for:
you define a buffer in C beforehand and then create an N-API/JS ArrayBuffer that uses that underlying buffer.
napi_create_arraybuffer, by contrast, allocates a fresh buffer, which can also be manipulated from C afterwards, but you couldn't e.g. load a file into a buffer first and then wrap it. So napi_create_external_arraybuffer is the way to go.
edit: when I asked the question I was writing my open-source bachelor's thesis, so here's how I used it in the end: https://github.com/ixy-languages/ixy.js/blob/ce1d7130729860245527795e483b249a3d92a0b2/src/module.c#L112
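For reference, here is a minimal sketch of the external approach (the function names and the 100-byte buffer are made up for illustration; the finalize callback frees the C allocation once the ArrayBuffer is garbage collected):
#include <node_api.h>
#include <stdlib.h>
// called by the runtime when the ArrayBuffer is collected
static void FinalizeBuffer(napi_env env, void* data, void* hint) {
  free(data); // release the C-owned memory
}
napi_value WrapExternalBuffer(napi_env env, napi_callback_info info) {
  // buffer owned by C; could just as well be filled from a file first
  void* data = malloc(100);
  napi_value arrayBuffer;
  // wraps the existing memory without copying it
  napi_create_external_arraybuffer(env, data, 100, FinalizeBuffer, NULL, &arrayBuffer);
  return arrayBuffer;
}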
I don't know if this helps (I'm also relatively new to N-API), but you can create an ArrayBuffer from a void* and a fixed length: https://nodejs.org/api/n-api.html#n_api_napi_create_arraybuffer
For example:
napi_value CreateArrayBuffer(napi_env env, napi_callback_info info) {
  // the value to return
  napi_value arrayBuffer;
  // receives a pointer to the buffer's memory; napi_create_arraybuffer
  // allocates the 100 bytes itself, so there is no need to malloc anything
  void* yourPointer = NULL;
  // creates your ArrayBuffer
  napi_create_arraybuffer(env, 100 /* bytes */, &yourPointer, &arrayBuffer);
  return arrayBuffer; // ArrayBuffer with 100 bytes length
}
