Reading N-API object into C++ primitive - javascript

I've created a simple N-API module starting from the ObjectWrap boilerplate of generator-napi-module, and successfully passed data (an array containing objects with string, number and boolean properties) to JS. However, I'm unable to parse the properties of one of the same objects passed back to the native code; specifically, creating a uint32_t value from a property (a number) of the passed object.
Suppose an array of objects is created and passed to JS:
Napi::Value ObjectWrapAddon::GetSomeList(const Napi::CallbackInfo& info){
    Napi::Env env = info.Env();

    native_struct_one *data = NULL;
    native_struct_two opts = { TRUE, FALSE, FALSE };
    int retVal = native_lib_method(&data, &opts);

    if(retVal != OK) {
        return Napi::Array::New(env); // return empty array
    }

    Napi::Array arr = Napi::Array::New(env);
    uint32_t i = 0;
    do {
        Napi::Object tempObj = Napi::Object::New(env);
        tempObj.Set("someProp", data->someVal);
        arr[i] = tempObj;
        i++;
        data = data->next;
    } while(data);

    return arr;
}
Then one of those objects is passed to a native function call:
Napi::Value ObjectWrapAddon::OtherMethod(const Napi::CallbackInfo& info){
    Napi::Env env = info.Env();

    Napi::Object obj = info[0].As<Napi::Object>();
    uint32_t temp = obj.Get("someProp").As<Napi::Number>();

    return Napi::Number::New(env, temp);
}
This builds fine, but the above OtherMethod() gives an "A number was expected" error at uint32_t temp = obj.Get('someProp').As<Napi::Number>().
How would I create a native (C++) value from a JS object property value?

I missed two things, which allow this to work:
I was inconsistent with the property-name strings passed to Get/Set. The key used with Napi::Object::Get must match the one used with Napi::Object::Set exactly, and I had mixed single-quoted and double-quoted keys (in C++, a single-quoted 'someProp' is a character literal, not a string), so the lookup never found the property.
The Uint32Value() method needs to be used, per the docs (I must have removed it in my tinkering), giving: uint32_t temp = obj.Get("someProp").As<Napi::Number>().Uint32Value();
Fixing these issues provides the expected behavior and output.
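For reference, this is roughly what the round trip looks like from the JavaScript side once those fixes are in (a minimal sketch; the module name and the getSomeList/otherMethod exports are assumptions based on the boilerplate, not taken from the question):
// Hypothetical JS usage of the addon described above
const addon = require('bindings')('addon');   // module name is an assumption
const wrapped = new addon.ObjectWrapAddon();  // export name is an assumption
const list = wrapped.getSomeList();           // e.g. [ { someProp: 42 }, ... ]
const echoed = wrapped.otherMethod(list[0]);  // hands one object back to native code
console.log(echoed);                          // the someProp value, once Uint32Value() is used natively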

Related

Print JSON path and variable result in one call in JavaScript

Is it possible to console.log something like this:
myParent.myChildData(5)
(variable literal name + value in brackets)
from a JSON object such as this:
{myParent: {myChildData: 5}}
Ideally, I would like to do it while referencing the object notation only once. Something like:
console.log(printExpression(myParent.myChildData))
Where printExpression is a generic helper function that could return this (I'm happy to write it). I've searched high and low, but obviously printExpression receives the already-evaluated value, and that is the road block.
You can turn JSON into a JavaScript object by using JSON.parse(jsonString).
You can store that as a variable and then console.log it.
Or you can just directly console.log the passed data like this:
console.log(JSON.parse('{"myParent":{"myChildData": 5}}').myParent.myChildData);
Edit
After understanding what exactly the helper function should do, I've created a printExpression function that returns string values based on your example.
function printExpression(object, stringBefore) {
    // Recursively build a new object whose leaf values are the formatted strings
    let newObject = {};
    for (var key in object) {
        // Make sure the key exists on the object itself
        if (object.hasOwnProperty(key)) {
            let value = object[key];
            // If the value is an object, recurse, carrying the path built so far
            if (typeof(value) == "object") {
                let childObject = printExpression(value, (stringBefore || "") + key + ".");
                newObject[key] = childObject;
            }
            // If not, store a string in the wanted syntax
            else {
                // Form the string based on the specific syntax
                let str = key + "(" + value + ")";
                // Prepend the accumulated path, if any
                if (stringBefore) {
                    str = stringBefore + str;
                }
                newObject[key] = str;
            }
        }
    }
    // Return the new object
    return newObject;
}
var example = printExpression(JSON.parse('{"myParent": {"myChildData": 5}}'));
console.log(example.myParent.myChildData);
How It Works
When creating the helper object, it recursively reads all the keys of the original object and makes a new object that returns the keys in an organized way. For example if the original object was { greeting: "hello" } then newObject.greeting would be "greeting(hello)" (as you said it should be).
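A quick check of that behaviour (a small sketch, not part of the original answer):
var demo = printExpression({ greeting: "hello" });
console.log(demo.greeting); // "greeting(hello)"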
Possible Problems
Doesn't get updated when you change the original object. I don't think this will be much of a problem as you seem to be reading static JSON data, but just letting you know.

Array with less values than length - null-overhead in json

I'm writing a simple game. It's based on a field of 32x32 squares. Each square can contain a game element. When an element appears/moves/changes, the server sends a JSON object containing a 2D array that represents the whole game field with all its elements via WebSockets to all clients.
That is quite a lot of data when sent to multiple clients, multiple times per second.
So I was wondering how I could reduce that.
I thought it would help to remove everything from the big array that the client has already received anyway. But if you have something like this:
var gameField = [];
gameField[6] = [];
gameField[6][10] = "player1";
console.log(JSON.stringify(gameField));
it converts to:
[null,null,null,null,null,null,[null,null,null,null,null,null,null,null,null,null,"player1"]]
That's really unnecessary overhead.
Couldn't it generate something like this instead?
{"6": {"10":"player1"}}
I'm using node.js for the game server btw.
So how do I reduce the data? Is there maybe a special JSON.stringify method?
You can use an object instead of an array.
var gameField = {};
gameField[6] = gameField[6] || {};
gameField[6][10] = "player1";
console.log(JSON.stringify(gameField));
You can customize the serialization by using a replacer function with JSON.stringify. The matching reviver for JSON.parse then depends on how the inner values are recognized; in the example below they are plain strings, so a typeof check is enough to tell a leaf value from a nested object:
var gameField = [];
gameField[6] = [];
gameField[6][10] = "player1";

function replacer(key, val){
    if(val instanceof Array){
        return val.reduce((o, el, i) => {
            if(el) o[i] = el;
            return o;
        }, {});
    }
    return val;
}

var json = JSON.stringify(gameField, replacer);
console.log("JSON: ", json);
function reviver(key, val){
    if(typeof val === 'string')
        return val;
    return Object.keys(val).reduce((arr, k) => {
        arr[k] = val[k];
        return arr;
    }, []);
}

gameField = JSON.parse(json, reviver);
console.log("Parsed: ", gameField);
If you want to remove the nulls from the array, you can use the underscore library:
npm install underscore --save
Then, in the file where the related code lives:
var _ = require('underscore');
To remove the nulls, just filter:
result = _.filter(result, function(item){
    return item != null;
});
result will then be an array without any nulls.

JSON stringify values of array

I'm trying to stringify an object but don't know why it is not working as expected:
function request(url) {
    this.url = url;
    this.head = [];
}

var r = new request("http://test.com");
r.head["cookie"] = "version=1; skin=new";
r.head["agent"] = "Browser 1.0";
document.write(JSON.stringify(r));
I was hoping this object would be stringified as:
{"url":"http://test.com","head":["cookie":"version=1; skin=new", "agent":"Browser 1.0"]}
But I only get:
{"url":"http://test.com","head":[]}
How to fix it?
You want the head property of r to be an associative array, I think (like in PHP). Those don't exist in JavaScript; arrays have values indexed by number.
Since r.head is an object (an array is an object in JS), you can add properties to it with r.head["whatever property name"] = "value", but those properties are not serialized by JSON.stringify, because r.head is defined as an array and only its numbered index values get serialized.
To fix this, define r.head as an object, so JSON.stringify will serialize all of its properties.
function request(url) {
    this.url = url;
    this.head = {};
}

var r = new request("http://test.com");
r.head["cookie"] = "version=1; skin=new";
r.head["agent"] = "Browser 1.0";
document.write(JSON.stringify(r));
If you run the following code in your console (press F12 in your browser), you'll see that arrays are not serialized the same way as objects:
var b = [];
b.something=22
console.log(b.something);
console.log(JSON.stringify(b));//=[]
console.log(b.hasOwnProperty("something"))//=true
b = {};
b.something=22
console.log(b.something);
console.log(JSON.stringify(b));//={"something":22}
console.log(b.hasOwnProperty("something"))//=true
It would be impossible to serialize it the way you are hoping to.
With this object:
function request(url) {
    this.url = url;
    this.head = [];
}
This variation:
var r = new request("http://test.com");
r.head.push({"cookie": "version=1; skin=new"});
r.head.push({"agent": "Browser 1.0"});
document.write(JSON.stringify(r));
would give you:
{"url":"http://test.com","head":[{"cookie":"version=1; skin=new"},{"agent":"Browser 1.0"}]}
If you were to change the object to:
function request(url) {
    this.url = url;
    this.head = {};
}

var r = new request("http://test.com");
r.head["cookie"] = "version=1; skin=new";
r.head["agent"] = "Browser 1.0";
document.write(JSON.stringify(r));
it would give you:
{"url":"http://test.com","head":{"cookie":"version=1; skin=new","agent":"Browser 1.0"}}
The first variation is guaranteed to give you the head values in order when you iterate it. It also has the advantage in that you can later insert things in specific order if that is of interest.
The second version will, by convention, give you the items back in the order they were inserted, as long as there are no number-based keys; older editions of the ECMAScript spec did not guarantee that ordering (newer ones do specify it: integer-like keys come first in ascending order, then string keys in insertion order).
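A quick illustration of that ordering behaviour (a small sketch, not from the original answer):
var o = {};
o["b"] = 1;
o["2"] = 2;
o["1"] = 3;
// Integer-like keys serialize first, in ascending numeric order,
// then the remaining string keys in insertion order:
console.log(JSON.stringify(o)); // {"1":3,"2":2,"b":1}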

How are the attribute names on objects stored in Javascript?

I was always under the assumption that the keys of an object are stored as strings, and that any non-string value would be cast. So it was under this assumption that, while writing some code that had to store a small value for many thousands of keys, I converted all the keys to base 36:
// theKey is an integer
myMap[theKey.toString(36)] = theValue;
Then, I decided to see whether my assumption was actually correct, and used Chrome's profiler to check the memory usage. Roughly here are the tests I ran and the memory usage:
window.objIntegers = {};
for (i = 100000; i--) window.objIntegers[i] = 'a';
// 786kb
window.objStrings = {};
for (i = 100000; i--) window.objStrings[i.toString(36)] = 'a';
// 16.7mb!
// and the same pattern but with:
key = i + .5; // 16.7mb
key = i + ''; // 786kb
key = '0' + i; // 16.7mb
key = i + '0'; // 16.7mb
Obviously, my assumptions were off. What I'm wondering, though, is how the keys are actually stored, and whether this behaviour is standard or just some extra trickery added by the Chromium/WebKit team.
This is indeed some extra trickery by V8.
A JSObject (internal C++ representation of a JS Object) has two attributes, elements and properties, where the "elements" are JS attributes with numerical indices, while the "properties" are JS attributes with string indices.
Obviously, numerical indices consume far less memory here, since the property names need not be stored.
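In terms of the question's measurements, only keys that are canonical array indices can go into the cheap "elements" store; a base-36 key usually is not one (a small illustration, not from the original answer):
// "10" is a canonical array index, so it can be stored as an element;
// (10).toString(36) === "a" is not, so it becomes a named property.
var obj = {};
obj[(10).toString(36)] = 'x';  // named property "a"
obj[10] = 'y';                 // element at index 10
console.log(Object.keys(obj)); // ["10", "a"], index keys are listed first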
http://code.google.com/intl/de-DE/chrome/devtools/docs/memory-analysis-101.html#primitive_objects
A typical JavaScript object possesses two arrays: one for storing named properties, another for storing numeric elements.
This can be seen from v8 source code:
http://code.google.com/p/v8/source/browse/trunk/src/objects.h#1483
// [properties]: Backing storage for properties.
...
// [elements]: The elements (properties with names that are integers).
http://code.google.com/p/v8/source/browse/trunk/src/runtime.cc#4462
MaybeObject* Runtime::SetObjectProperty(Isolate* isolate,
                                        Handle<Object> object,
                                        Handle<Object> key,
                                        Handle<Object> value,
                                        PropertyAttributes attr,
                                        StrictModeFlag strict_mode) {
  ...
  // Check if the given key is an array index.
  uint32_t index;
  if (key->ToArrayIndex(&index)) {
    // In Firefox/SpiderMonkey, Safari and Opera you can access the characters
    // of a string using [] notation. We need to support this too in
    // JavaScript.
    // In the case of a String object we just need to redirect the assignment to
    // the underlying string if the index is in range. Since the underlying
    // string does nothing with the assignment then we can ignore such
    // assignments.
    if (js_object->IsStringObjectWithCharacterAt(index)) {
      return *value;
    }

    Handle<Object> result = JSObject::SetElement(
        js_object, index, value, attr, strict_mode, set_mode);
    if (result.is_null()) return Failure::Exception();
    return *value;
  }

  if (key->IsString()) {
    Handle<Object> result;
    if (Handle<String>::cast(key)->AsArrayIndex(&index)) {
      result = JSObject::SetElement(
          js_object, index, value, attr, strict_mode, set_mode);
    } else {
      Handle<String> key_string = Handle<String>::cast(key);
      key_string->TryFlatten();
      result = JSReceiver::SetProperty(
          js_object, key_string, value, attr, strict_mode);
    }
    if (result.is_null()) return Failure::Exception();
    return *value;
  }

  // Call-back into JavaScript to convert the key to a string.
  bool has_pending_exception = false;
  Handle<Object> converted = Execution::ToString(key, &has_pending_exception);
  if (has_pending_exception) return Failure::Exception();
  Handle<String> name = Handle<String>::cast(converted);

  if (name->AsArrayIndex(&index)) {
    return js_object->SetElement(
        index, *value, attr, strict_mode, true, set_mode);
  } else {
    return js_object->SetProperty(*name, *value, attr, strict_mode);
  }
}
I won't go into the details, but note that SetObjectProperty calls either SetElement or SetProperty, depending on the key. Not sure why the check fails in your test case key = i + '0' though.
It's an optimization in Chromium. I believe it has heuristics (here's one mention of it) to determine the most efficient way to store properties internally. All that the ECMAScript spec dictates is the interface between JavaScript and the environment; it says nothing about how the objects exposed to JavaScript are implemented internally.

Checking for duplicate Javascript objects

TL;DR version: I want to avoid adding duplicate Javascript objects to an array of similar objects, some of which might be really big. What's the best approach?
I have an application where I'm loading large amounts of JSON data into a Javascript data structure. While it's a bit more complex than this, assume that I'm loading JSON into an array of Javascript objects from a server through a series of AJAX requests, something like:
var myObjects = [];
function processObject(o) {
    myObjects.push(o);
}
for (var x = 0; x < 1000; x++) {
    $.getJSON('/new_object.json', processObject);
}
To complicate matters, the JSON:
is in an unknown schema
is of arbitrary length (probably not enormous, but could be in the 100-200 kb range)
might contain duplicates across different requests
My initial thought is to have an additional object to store a hash of each object (via JSON.stringify?) and check against it on each load, like this:
var myHashMap = {};
function processObject(o) {
    var hash = JSON.stringify(o);
    // is it in the hashmap?
    if (!(myHashMap[hash])) {
        myObjects.push(o);
        // set the hashmap key for future checks
        myHashMap[hash] = true;
    }
    // else ignore this object
}
but I'm worried about having property names in myHashMap that might be 200 kb in length. So my questions are:
Is there a better approach for this problem than the hashmap idea?
If not, is there a better way to make a hash function for a JSON object of arbitrary length and schema than JSON.stringify?
What are the possible issues with super-long property names in an object?
I'd suggest you create an MD5 hash of the JSON.stringify(o) and store that in your hashmap with a reference to your stored object as the data for the hash. And to make sure that there are no object key order differences in the JSON.stringify(), you have to create a copy of the object that orders the keys.
Then, when each new object comes in, you check it against the hash map. If you find a match in the hash map, then you compare the incoming object with the actual object that you've stored to see if they are truly duplicates (since there can be MD5 hash collisions). That way, you have a manageable hash table (with only MD5 hashes in it).
Here's code to create a canonical string representation of an object (including nested objects or objects within arrays) that handles object keys that might be in a different order if you just called JSON.stringify().
// Code to do a canonical JSON.stringify() that puts object properties
// in a consistent order
// Does not allow circular references (child containing reference to parent)
JSON.stringifyCanonical = function(obj) {
    // compatible with either browser or node.js
    var Set = typeof window === "object" ? window.Set : global.Set;

    // poor man's Set polyfill
    if (typeof Set !== "function") {
        Set = function(s) {
            if (s) {
                this.data = s.data.slice();
            } else {
                this.data = [];
            }
        };
        Set.prototype = {
            add: function(item) {
                this.data.push(item);
            },
            has: function(item) {
                return this.data.indexOf(item) !== -1;
            }
        };
    }

    function orderKeys(obj, parents) {
        if (typeof obj !== "object") {
            throw new Error("orderKeys() expects object type");
        }
        var set = new Set(parents);
        if (set.has(obj)) {
            throw new Error("circular object in stringifyCanonical()");
        }
        set.add(obj);
        var tempObj, item, i;
        if (Array.isArray(obj)) {
            // no need to re-order an array
            // but need to check it for embedded objects that need to be ordered
            tempObj = [];
            for (i = 0; i < obj.length; i++) {
                item = obj[i];
                if (typeof item === "object") {
                    tempObj[i] = orderKeys(item, set);
                } else {
                    tempObj[i] = item;
                }
            }
        } else {
            tempObj = {};
            // get keys, sort them and build new object
            Object.keys(obj).sort().forEach(function(item) {
                if (typeof obj[item] === "object") {
                    tempObj[item] = orderKeys(obj[item], set);
                } else {
                    tempObj[item] = obj[item];
                }
            });
        }
        return tempObj;
    }

    return JSON.stringify(orderKeys(obj));
}
And the algorithm:
var myHashMap = {};

function processObject(o) {
    var stringifiedCandidate = JSON.stringifyCanonical(o);
    var hash = CreateMD5(stringifiedCandidate);
    var list = [], found = false;
    // is it in the hashmap?
    if (!myHashMap[hash]) {
        // not in the hash table, so it's a unique object
        myObjects.push(o);
        list.push(myObjects.length - 1); // put a reference to the object with this hash value in the list
        myHashMap[hash] = list;          // store the list in the hash table for future comparisons
    } else {
        // the hash does exist in the hash table, check for an exact object match to see if it's really a duplicate
        list = myHashMap[hash];          // get the list of other object indexes with this hash value
        // loop through the list
        for (var i = 0; i < list.length; i++) {
            if (stringifiedCandidate === JSON.stringifyCanonical(myObjects[list[i]])) {
                found = true;            // found an exact object match
                break;
            }
        }
        // if not found, it's not an exact duplicate, even though there was a hash match
        if (!found) {
            myObjects.push(o);
            myHashMap[hash].push(myObjects.length - 1);
        }
    }
}
A test case for JSON.stringifyCanonical() is here: https://jsfiddle.net/jfriend00/zfrtpqcL/
Maybe. For example, if you know what kind of objects come through, you could write a better indexing and searching system than plain JS object keys. But you could only do that in JavaScript, while object keys are implemented in C...
Does your hashing have to be lossless or not? If it can be lossy, try lossy compression (MD5). I'm guessing you will lose some speed and gain some memory. By the way, does JSON.stringify(o) guarantee the same key ordering? Because {foo: 1, bar: 2} and {bar: 2, foo: 1} are equal as objects, but not as strings.
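For what it's worth, key order follows insertion order for plain string keys, so those two literals do stringify differently (a quick sketch, not from the original answer):
console.log(JSON.stringify({foo: 1, bar: 2})); // {"foo":1,"bar":2}
console.log(JSON.stringify({bar: 2, foo: 1})); // {"bar":2,"foo":1}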
They cost memory.
One possible optimization:
Instead of using getJSON, use $.get and pass "text" as the dataType param. Then you can use the raw text as your hash and convert it to an object afterwards.
Actually, by writing that last sentence I thought of another solution:
Collect all results with $.get into an array
Sort it with the built-in (C-speed) Array.sort
Now you can easily spot and remove duplicates with a single for loop (see the sketch after this list)
Again, keep in mind that different JSON strings can produce the same JavaScript object.
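A minimal sketch of that collect-sort-dedupe idea, assuming the responses are fetched as raw text (the URL and request count are taken from the question; everything else is illustrative):
var texts = [];     // raw JSON strings
var myObjects = [];

function collectText(text) {
    texts.push(text);
}
for (var x = 0; x < 1000; x++) {
    $.get('/new_object.json', collectText, 'text'); // "text" dataType: no parsing yet
}

// later, once all requests have completed:
function dedupe() {
    texts.sort(); // built-in sort puts identical strings next to each other
    for (var i = 0; i < texts.length; i++) {
        if (i === 0 || texts[i] !== texts[i - 1]) {
            myObjects.push(JSON.parse(texts[i]));
        }
    }
}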
