JavaScript ArrayBuffer - Cannot set null value to ArrayBuffer

// Creating an ArrayBuffer with a size in bytes
var buffer = new ArrayBuffer(16);
// Creating views
var view1 = new DataView(buffer);
// Attempting to put null in slot 0
view1.setInt8(0, null);
console.log(view1.getInt8(0));
Result:
0
Expected:
null
How can I set null/empty data? Is there a way to check for null data in an ArrayBuffer?
Eg: We have a csv file with data like this:
0,,1,0
Thank you so much

From the MDN ArrayBuffer docs (emphasis mine):
The ArrayBuffer object is used to represent a generic, fixed-length
raw binary data buffer.
I.e. ArrayBuffers hold binary (Number) values only. For this reason, the DataView API will only let you set float or integer values. null, however, is not a Number; it's one of JS's primitive values.
You can further see this in the ECMAScript specification, where step 4 of the abstract SetValueInBuffer operation reads, "Assert: Type(value) is Number." The spec does not define how to handle non-Number types, however. One could argue that a TypeError should be thrown in this case, but all the implementations I checked (Chrome, Safari, Firefox, Node.js) quietly cast the value to zero... which is what you're seeing. You'll get the same behavior if you pass a String, Date, RegExp, Boolean, or undefined.
(If you pass a BigInt or Symbol, however, you appear to get a TypeError... weird.)
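The coercion described above is easy to observe directly. A minimal sketch (behavior as observed in the engines mentioned; the spec leaves non-Number handling implicit):

```javascript
// Every non-Number value below is run through ToNumber before storage,
// so null/undefined end up as 0, while a BigInt throws.
const view = new DataView(new ArrayBuffer(4));

view.setInt8(0, null);      // null      -> 0
view.setInt8(1, undefined); // NaN       -> 0
view.setInt8(2, "7");       // string    -> 7 (strings are coerced too)
view.setInt8(3, true);      // boolean   -> 1

console.log(view.getInt8(0), view.getInt8(1), view.getInt8(2), view.getInt8(3));
// 0 0 7 1

// BigInt, by contrast, cannot be coerced and throws:
try {
  view.setInt8(0, 1n);
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

So there is no way to round-trip null through an ArrayBuffer; if you need "missing" values (as in the `0,,1,0` CSV example), you have to reserve a sentinel value or track missing slots separately.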

Related

Paper.js exportJSON([options]): how to specify the options?

I am using Paper.js in JavaScript because I need to debug my code.
I want to generate a JSON string of my drawings, which works well.
But I need to reduce the precision.
0 comes out as 0.0003, 510.05 as 510.05005... things like that.
Documentation mentions:
————
exportJSON([options])
Exports (serializes) the project with all its layers and child items to a JSON data object or string.
Options:
options.asString: Boolean — whether the JSON is returned as a Object or a String — default: true.
options.precision: Number — the amount of fractional digits in numbers used in JSON data — default: 5.
Parameters:
options: Object — the serialization options — optional
Returns:
String — the exported JSON data
I do not understand what this means. How do I specify these options? Whatever I try ends up in a crash.
I have been programming JavaScript for about 3 months; I am coming from C and assembler.
Maybe my question is too simple for this forum?
I did try:
json_vect_string = layer_wall_vector.exportJSON(true, 2);
json_vect_string = layer_wall_vector.exportJSON(asString = true, precision = 2);
json_vect_string = layer_wall_vector.exportJSON(options.asString = true, options.precision = 2);
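Per the documentation quoted above, exportJSON takes a single object parameter; JavaScript has no named parameters, so all three attempts above pass the wrong thing. A sketch of the documented call shape, plus a minimal mock showing how a library typically consumes such an options object (the mock, its data, and its rounding are illustrative assumptions; only the `{ asString, precision }` shape and defaults come from the quoted docs):

```javascript
// Documented call shape: one plain options object, not separate arguments.
// (untested against a real Paper.js install; layer_wall_vector is the
// asker's own layer object)
// json_vect_string = layer_wall_vector.exportJSON({ asString: true, precision: 2 });

// Hypothetical mock illustrating how an options object with the
// documented defaults is typically read via destructuring:
function exportJSON(options = {}) {
  const { asString = true, precision = 5 } = options;
  const data = { x: 510.05005, y: 0.0003 }; // stand-in for serialized items
  const json = JSON.stringify(data, (key, value) =>
    typeof value === "number" ? Number(value.toFixed(precision)) : value
  );
  return asString ? json : JSON.parse(json);
}

console.log(exportJSON({ asString: true, precision: 2 }));
// {"x":510.05,"y":0}
```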

How to make JSON.parse() to treat all the Numbers as BigInt?

I have some numbers in JSON which overflow the Number type, so I want them to be bigint, but how?
{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}
TL;DR
You may employ the JSON.parse() reviver parameter.
Detailed Solution
To control JSON.parse() behavior that way, you can make use of the second parameter of JSON.parse (reviver) - the function that pre-processes key-value pairs (and may potentially pass desired values to BigInt()).
Yet, the values recognized as numbers will still be coerced (the credit for pinpointing this issue goes to @YohanesGultom).
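That coercion can be demonstrated in isolation: the reviver only sees a value after JSON.parse has already read it as a lossy 64-bit float. A sketch using 2^53 + 1 = 9007199254740993, the first integer a double cannot represent:

```javascript
// The reviver runs *after* number parsing, so precision is already gone.
const seen = [];
JSON.parse('[9007199254740993]', (key, value) => {
  seen.push(value);
  return value;
});
console.log(seen[0]); // 9007199254740992 -- off by one already
console.log(Number.isSafeInteger(seen[0])); // false
```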
To get around this, you may enquote your big numbers (to turn them into strings) in your source JSON string, so that their values are preserved upon converting to bigint.
As long as you wish to convert only certain numbers to bigint, you will need to pick appropriate criteria (e.g. check whether the value exceeds Number.MAX_SAFE_INTEGER with Number.isSafeInteger(), as @PeterSeliger has suggested).
Thus, your problem may be solved with something like this:
// source JSON string
const input = `{"foo":[[0],[64],[89],[97]],"bar":[[2323866757078990912,144636906343245838,441695983932742154,163402272522524744],[2477006750808014916,78818525534420994],[18577623609266200],[9008333127155712]]}`

// function that implements desired criteria
// to separate *big numbers* from *small* ones
//
// (works for input parameter num of type number/string)
const isBigNumber = num => !Number.isSafeInteger(+num)

// function that enquotes *big numbers* matching
// desired criteria into double quotes inside
// JSON string
//
// (function checking for *big numbers* may be
// passed as a second parameter for flexibility)
const enquoteBigNumber = (jsonString, bigNumChecker) =>
    jsonString
        .replaceAll(
            /([:\s\[,]*)(\d+)([\s,\]]*)/g,
            (matchingSubstr, prefix, bigNum, suffix) =>
                bigNumChecker(bigNum)
                    ? `${prefix}"${bigNum}"${suffix}`
                    : matchingSubstr
        )

// parser that turns matching *big numbers* in
// source JSON string to bigint
const parseWithBigInt = (jsonString, bigNumChecker) =>
    JSON.parse(
        enquoteBigNumber(jsonString, bigNumChecker),
        (key, value) =>
            !isNaN(value) && bigNumChecker(value)
                ? BigInt(value)
                : value
    )

// resulting output
const output = parseWithBigInt(input, isBigNumber)
console.log("output.foo[1][0]: \n", output.foo[1][0], `(type: ${typeof output.foo[1][0]})`)
console.log("output.bar[0][0]: \n", output.bar[0][0].toString(), `(type: ${typeof output.bar[0][0]})`)
Note: the RegExp pattern used to match strings of digits among JSON values is not particularly robust, so feel free to come up with your own (mine was the quickest I could put together for demo purposes).
Note: you may still opt for some library, as suggested by @YohanesGultom, yet adding 10k to your client bundle or 37k to your server-side dependencies (and possibly to a Docker image) for that sole purpose may not be quite reasonable.

Why is DataView by default in Big Endian?

Could someone explain why DataView uses big-endian byte order, while our computers (including ARM processors) work in little-endian?
Typed arrays like Uint32Array() already use little endian, which in my opinion is correct.
// DataView (Big Endian)
const dataview = new DataView(new ArrayBuffer(4));
dataview.setUint32(0, 42);
console.log(new Uint8Array(dataview.buffer).toString());
// result: 0,0,0,42
// Typed Array (Little Endian)
const typearray = new Uint32Array([42]);
console.log(new Uint8Array(typearray.buffer).toString());
// result: 42,0,0,0
I expected little endian for number types; it's not consistent.
I know about the optional littleEndian argument of the DataView methods.
But my question is: why is it not set by default?
This is the prototype for the setUint32 method as defined by the ECMAScript standard (https://www.ecma-international.org/ecma-262/6.0/#sec-dataview-constructor):
24.2.4.20 DataView.prototype.setUint32 ( byteOffset, value [ , littleEndian ] )
Whether the value is stored as little endian or big endian is defined by the optional 3rd parameter, whose default value is defined by the standard as false (store as big endian).
Big endian is the default byte ordering for all TCP/IP network protocols and it is not rare at all.
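A quick sketch of that third parameter in action, writing the same value both ways into one buffer:

```javascript
// Omitting the littleEndian argument (or passing false) stores big-endian,
// the spec default; passing true stores little-endian.
const view = new DataView(new ArrayBuffer(8));
view.setUint32(0, 42);        // big endian (default)
view.setUint32(4, 42, true);  // little endian (explicit)
console.log(new Uint8Array(view.buffer).toString());
// 0,0,0,42,42,0,0,0
```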

Why can't we empty a Float32Array by assigning zero to its length, like a normal array, in JavaScript?

Let's suppose we have two arrays.
var normal = new Array(1,2,3,4);
var float32 = new Float32Array(4);
Now it's possible to empty the normal array with
normal.length = 0;
But in the case of float32 I am unable to empty the array with
float32.length = 0;
The array remains the same. Why?
Because your Float32Array is just a view over an underlying ArrayBuffer, which itself is a fixed-length raw binary data buffer.
At the risk of simplifying a bit, once an ArrayBuffer has been assigned memory slots, it will stay in the same slots, and you won't be able to modify its byteLength*.
*Actually, you can empty the object which holds the data by transferring it, even though transferring it just to empty the ArrayBuffer object makes no sense, since you won't be able to change its length again (no push):
var arr = new Float32Array(56);
console.log('before transfer', arr.length); // 56
// window.postMessage with a transfer list detaches the buffer
// (run this in a browser window, not in Node.js)
postMessage(arr, '*', [arr.buffer]);
console.log('after transfer', arr.length); // 0
The array remains the same. Why?
Float32Array is a TypedArray and, as per the docs:
The length property is an accessor property whose set accessor
function is undefined, meaning that you can only read this property.
Hence, even after setting the length property to 0, its value remains as is:
var arr = new Float32Array([21,31]);
arr.length = 0;
console.log(arr.length) //2
This might be a bit controversial but...
This state of affairs exists because the typed arrays are not generally meant for use by JavaScript developers. They are there to make JavaScript a more attractive target for compilers like emscripten.
Arrays in C/C++ are fixed-size (which is why you declare the size in bytes of those arrays when you create them in JS) and can only hold elements of a single type, so the idea of altering their length makes no sense and would invalidate a lot of the assumptions that C/C++ compilers get to make because of it.
The whole point of having typed arrays is to allow faster computations for expensive processes (e.g. 3D graphics) and if you had to check every access for an out-of-bounds access it would be too slow.

Invalid key path in IndexedDB: restrictions?

I'm trying to create a really simple IndexedDB with some JavaScript, but it already fails in the onupgradeneeded handler. Apparently the browser (Chrome 57) is not able to parse the keyPath (see Basic Concepts) of my store.
I'm following more or less these simple examples: MDN or Opera-Dev.
Suppose I want to store objects like this one in the DB:
{
    "1": 23, // the unique id
    "2": 'Name',
    "3": 'Description',
    "4": null,
    "5": null
}
Here is the code:
var sStoreNodes = 'nodes';
var sIdFieldNode = '1'; // the important part

// event is fired for creating the DB and upgrading the version
request.onupgradeneeded = function(event)
{
    var db = event.target.result;
    // Create an objectStore for nodes. Unique key should be the id of the node, on property 1.
    // So ID will be the key!
    var objectStore = db.createObjectStore(
        sStoreNodes,
        {
            // changing to a plain string works, if it is a valid identifier and not just a stringified number
            'keyPath' : [ sIdFieldNode ],
            'autoIncrement' : false // really important here
        });
};
The error message reads like:
Uncaught DOMException: Failed to execute 'createObjectStore' on 'IDBDatabase': The keyPath option is not a valid key path.
at IDBOpenDBRequest.CCapIndexedDB.request.onupgradeneeded
I can also try to leave out the key path, but I'm wondering why this happens and what I can do about it, if I really need to use a (complex) key path.
Regarding the spec:
I'm not sure whether this can be applied here:
A value is said to be a valid key if it is one of the following ECMAScript [ECMA-262] types: Number primitive value, String primitive value, Date object, or Array object.
and what this actually means:
If the key path is a DOMString, the value [for getting the key path] will be a DOMString equal to the key path. If the key path is a sequence, the value will be a new Array, populated by appending Strings equal to each DOMString in the sequence.
Edit: This works if you don't use a stringified number but a string which is a valid identifier (beginning with a letter [a-zA-Z]). So 'keyPath' : 'b' is OK. I guess this is because the value is used for creating paths like a.b.c.
Here is the definition of a key path, from the spec:
A key path is a DOMString or sequence that defines how to extract a key from a value. A valid key path is one of:
An empty DOMString.
An identifier, which is a DOMString matching the IdentifierName production from the ECMAScript Language Specification [ECMA-262].
A DOMString consisting of two or more identifiers separated by periods (ASCII character code 46).
A non-empty sequence containing only DOMStrings conforming to the above requirements.
For a string containing an integer, clearly the first, third, and fourth options do not apply. For the second, we have to see what an IdentifierName is, which is a little complicated, but basically it has to start with a letter, underscore, or dollar sign. This means that a string containing just an integer is not a valid key path.
If you really do have an object where the primary key is in a field whose name is a string containing an integer, you can either rename the field or not use key paths (in which case you have to manually specify the key as the second argument to IDBObjectStore.add and IDBObjectStore.put).
You linked to the definition for a key, which defines the valid values that a key can have (like for an object {a: 1} where the key path is 'a' the key is 1, which is valid).
The other thing you linked to is about key paths like a.b.c referencing {a: {b: {c: 1}}}.
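The four spec rules quoted above can be sketched as a validity check. This is only an approximation: the IDENT regex below is an ASCII-only stand-in for ECMAScript's IdentifierName production (the real grammar also allows Unicode letters, and browsers implement the full rules natively):

```javascript
// Simplified check for a valid IndexedDB key path, following the four
// rules quoted from the spec. IDENT approximates IdentifierName
// (ASCII only; the real production admits Unicode and escapes).
const IDENT = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

function isValidKeyPathString(s) {
  if (s === "") return true;                           // rule 1: empty string
  return s.split(".").every(part => IDENT.test(part)); // rules 2 and 3
}

function isValidKeyPath(keyPath) {
  if (Array.isArray(keyPath)) {
    // rule 4: non-empty sequence of conforming strings
    return keyPath.length > 0 &&
      keyPath.every(p => typeof p === "string" && isValidKeyPathString(p));
  }
  return typeof keyPath === "string" && isValidKeyPathString(keyPath);
}

console.log(isValidKeyPath("a.b.c")); // true
console.log(isValidKeyPath("b"));     // true
console.log(isValidKeyPath(["1"]));   // false -- the asker's case
```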
