I have large numeric arrays in JS which I want to pass to C++ for processing.
IMHO the most efficient way is to let JS write directly into the C++ heap and pass a pointer as an argument in a direct call, like:
var size = 4096,
    BPE = Float64Array.BYTES_PER_ELEMENT,
    buf = Module._malloc(size * BPE),
    numbers = Module.HEAPF64.subarray(buf / BPE, buf / BPE + size),
    i;

// Populate the array and process the numbers:
parseResult(result, numbers);
Module.myFunc(buf, size);
The C++ functions to process the numbers look like:
void origFunc(double *buf, unsigned int size) {
    // process the data ...
}

void myFunc(uintptr_t bufAddr, unsigned int size) {
    origFunc(reinterpret_cast<double*>(bufAddr), size);
}
That works as expected, but I wonder whether there is any way to call origFunc directly from JavaScript, to get rid of myFunc and the ugly reinterpret_cast.
When I tried to bind origFunc via:
EMSCRIPTEN_BINDINGS(test) {
    function("origFunc", &origFunc, emscripten::allow_raw_pointers());
}
... and call it directly:
Module.origFunc(buf, size);
I get the error:
Uncaught UnboundTypeError: Cannot call origFunc due to unbound types: Pd
Is this a general restriction of emscripten, or is there a "less dirty" solution than the reinterpret_cast workaround?
You can use a static_cast if you:
- specify that the function takes a void * rather than a uintptr_t;
- don't use EMSCRIPTEN_BINDINGS, but use the EMSCRIPTEN_KEEPALIVE + cwrap / ccall way of communicating JS -> C++. For some reason, the EMSCRIPTEN_BINDINGS way resulted in a "getTypeName is not defined" exception when I tried it.
So the function looks like:
extern "C" int EMSCRIPTEN_KEEPALIVE myFunc(void *bufAddr, unsigned int size) {
origFunc(static_cast<double *>(bufAddr), size);
return 0;
}
which can be called from JavaScript by
Module.ccall('myFunc', 'number', ['number', 'number'], [buf, size]);
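If the function is called repeatedly, the same export can also be wrapped once with cwrap and reused; a minimal sketch of the equivalent cwrap form:
// Wrap once, call many times; same signature as the ccall above.
var myFunc = Module.cwrap('myFunc', 'number', ['number', 'number']);
myFunc(buf, size);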
I have a Node.js addon written in C++ using Nan. It works fantastically. However, I've not been able to figure out how to have my Node JavaScript code pass an arbitrary data object (e.g. {attr1: 42, attr2: 'hi', attr3: [5,4,3,2,1]}) to the C++ addon.
Until now, I've got around this by calling JSON.stringify() on my data object and then parsing the stringified JSON on the C++ side.
Ideally, I'd like to avoid copying data and just get a reference to the data object that I can access, or at least to copy it natively and avoid stringifying/parsing...
Any help would be appreciated!
You can allow your Node.js C++ addon to take arbitrarily typed arguments, but you must check and handle the types explicitly. Here is a simple example function that shows how to do this:
void args(const Nan::FunctionCallbackInfo<v8::Value>& info) {
    int i = 0;
    while (i < info.Length()) {
        if (info[i]->IsBoolean()) {
            printf("boolean = %s", info[i]->BooleanValue() ? "true" : "false");
        } else if (info[i]->IsInt32()) {
            printf("int32 = %ld", info[i]->IntegerValue());
        } else if (info[i]->IsNumber()) {
            printf("number = %f", info[i]->NumberValue());
        } else if (info[i]->IsString()) {
            printf("string = %s", *v8::String::Utf8Value(info[i]->ToString()));
        } else if (info[i]->IsObject()) {
            printf("[object]");
            v8::Local<v8::Object> obj = info[i]->ToObject();
            v8::Local<v8::Array> props = obj->GetPropertyNames();
            for (unsigned int j = 0; j < props->Length(); j++) {
                printf("%s: %s",
                    *v8::String::Utf8Value(props->Get(j)->ToString()),
                    *v8::String::Utf8Value(obj->Get(props->Get(j))->ToString())
                );
            }
        } else if (info[i]->IsUndefined()) {
            printf("[undefined]");
        } else if (info[i]->IsNull()) {
            printf("[null]");
        }
        i += 1;
    }
}
To actually solve the problem of handling arbitrary arguments that may contain objects with arbitrary data, I would recommend writing a function that parses an actual object, similar to how I parsed the function arguments in this example. Keep in mind that you may need to do this recursively to handle nested objects within the object.
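For illustration only (sketched in JavaScript rather than Nan, to keep it short): the shape of that recursion is a depth-first walk over the properties, recursing whenever a value is itself an object; the Nan version would use the same IsObject()/GetPropertyNames() checks shown above.
// Depth-first walk over an arbitrary object, using the question's example.
function walk(value, path) {
    if (value !== null && typeof value === 'object') {
        Object.keys(value).forEach(function (key) {
            walk(value[key], path + '.' + key); // recurse into nested objects/arrays
        });
    } else {
        console.log(path + ' = ' + value);
    }
}

walk({attr1: 42, attr2: 'hi', attr3: [5, 4, 3, 2, 1]}, 'root');
// root.attr1 = 42, root.attr2 = hi, root.attr3.0 = 5, ...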
You don't have to stringify your object to pass it to a C++ addon. There are methods to accept arbitrary objects, but they are not quite arbitrary: you have to write different code to parse each shape of object in C++.
Think of it like a database schema: you cannot save differently formatted data in a single collection/table; you need another table/collection with the specific schema.
Let's look at an example: we will pass an object {x: 10, y: 5} to the addon, and the C++ addon will return another object with the sum and product of the properties, like this: {sum: 15, prod: 50}.
In the C++ code:
NAN_METHOD(func1) {
    if (info.Length() > 0) {
        Local<Object> obj = info[0]->ToObject();
        Local<String> x = Nan::New<String>("x").ToLocalChecked();
        Local<String> y = Nan::New<String>("y").ToLocalChecked();
        Local<String> sum = Nan::New<String>("sum").ToLocalChecked();
        Local<String> prod = Nan::New<String>("prod").ToLocalChecked();
        Local<Object> ret = Nan::New<Object>();
        double x1 = Nan::Get(obj, x).ToLocalChecked()->NumberValue();
        double y1 = Nan::Get(obj, y).ToLocalChecked()->NumberValue();
        Nan::Set(ret, sum, Nan::New<Number>(x1 + y1));
        Nan::Set(ret, prod, Nan::New<Number>(x1 * y1));
        info.GetReturnValue().Set(ret);
    }
}
In JavaScript:
const addon = require('./build/Release/addon.node');
var obj = addon.func1({ 'x': 5, 'y': 10 });
console.log(obj); // { sum: 15, prod: 50 }
Here you can only send an object of the form {x: <number>, y: <number>} to the addon; otherwise it will not be able to parse and retrieve the data.
Similarly for an array. In C++:
NAN_METHOD(func2) {
    Local<Array> array = Local<Array>::Cast(info[0]);
    Local<String> ss_prop = Nan::New<String>("sum_of_squares").ToLocalChecked();
    Local<Array> squares = Nan::New<v8::Array>(array->Length());
    double ss = 0;
    for (unsigned int i = 0; i < array->Length(); i++) {
        if (Nan::Has(array, i).FromJust()) {
            // get data from a particular index
            double value = Nan::Get(array, i).ToLocalChecked()->NumberValue();
            // set a particular index - note the array parameter is mutable
            Nan::Set(array, i, Nan::New<Number>(value + 1));
            Nan::Set(squares, i, Nan::New<Number>(value * value));
            ss += value * value;
        }
    }
    // set a non-index property on the returned array
    Nan::Set(squares, ss_prop, Nan::New<Number>(ss));
    info.GetReturnValue().Set(squares);
}
In JavaScript:
const addon = require('./build/Release/addon.node');
var arr = [1, 2, 3];
console.log(addon.func2(arr)); //[ 1, 4, 9, sum_of_squares: 14 ]
In this way you can handle the different data types. If you want more complex objects or operations, you just have to combine these methods in one function and parse the data accordingly.
I am using node-ffi to integrate JS with a C library. I figured out ways to pass in complex structures as IN params and to get a single structure as an OUT param. But I could not successfully get an array of structures back from C and iterate over them in JS. I have the following C structure and API:
typedef struct _st {
    uint32_t index;
    uint8_t size;
    uint8_t* data;
} ST;

// Fills structArray with a dynamic number of struct objects (calloc'ed on heap)
int getData(uint8_t len, Input* arrInputs, ST* structArray);
I have emulated it on the JS side and invoke the C API as follows:
var ST = StructType({
    index: 'uint32',
    size: 'uint8',
    data: 'string'
});

var stArray = ArrayType(ST);

var Clib = ffi.Library('./CLib1', {
    'getData': [ 'int', ['uint8', InputArray, stArray] ]
});

var arrData = ref.alloc(ST);
var res = Clib.getData(3, arrInputs, arrData);
I could print and check that the values are filled properly within the out parameter inside C, but I am not able to get the values printed on the JS side; it either fails with a segfault or yields undefined.
Any suggestions would be of great help!
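No answer is included here, but one plausible culprit (an assumption, since the C allocation scheme isn't shown) is that ref.alloc(ST) reserves memory for just one struct, while getData writes three. A sketch that pre-allocates the backing memory with the stArray type already defined above:
// Sketch: allocate a 3-element array of ST structs up front (ref-array
// zero-fills the backing buffer) and pass the array itself as the out param.
var arrData = new stArray(3);
var res = Clib.getData(3, arrInputs, arrData);
console.log(arrData[0].index, arrData[0].size); // read back the first struct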
I need to send a large string representing a number over a wire to a client written in JavaScript. In theory it's a stream of 0s and 1s, like 011101011101101..., but it can be very, very big (millions of bits in length). My goal, of course, is to minimize the necessary processing and sending overhead. I thought about changing the base of that string so that it uses hex or a larger radix, which would greatly reduce the amount of data to send. JavaScript has built-in functions for converting to and from different numbering systems, so it looked like the way to go. However, the maximum supported radix is just 36, and a radix-36 character carries only about 5.17 bits, so my calculations show that a stream of 50 mln bits would still need roughly 9.7 million characters to be sent - way too much.
My question is: do you know of any way that would help me achieve this goal? Some constraints:
- The solution must work for streams of arbitrary length.
- Bit streams can be as large as 50 mln bits.
- Good performance should be guaranteed for lengths around 1-10 mln bits; for larger streams it should still work, but it doesn't have to scale up linearly.
- I would put more emphasis on optimizing the amount of data that has to be sent, not necessarily on reducing CPU overhead.
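For scale: a radix-36 character carries only log2(36) ≈ 5.17 bits, while a raw byte carries 8, so sending the stream as binary bytes beats any printable radix. A minimal sketch of that packing, assuming the transport can carry binary data (for example the arraybuffer response used in the answer below):
// Pack a string of '0'/'1' characters into a Uint8Array, 8 bits per byte,
// MSB-first within each byte.
function packBits(bitString) {
    var bytes = new Uint8Array(Math.ceil(bitString.length / 8));
    for (var i = 0; i < bitString.length; i++) {
        if (bitString.charAt(i) === '1') {
            bytes[i >> 3] |= 0x80 >> (i & 7);
        }
    }
    return bytes;
}

// Unpack on the receiving side; bitCount trims the padding in the last byte.
function unpackBits(bytes, bitCount) {
    var bits = [];
    for (var i = 0; i < bitCount; i++) {
        bits.push((bytes[i >> 3] >> (7 - (i & 7))) & 1);
    }
    return bits.join('');
}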
You could do something like this (the demo has been written with no way of testing it, but it "should work™"). Here is a fiddle for those who want to copy/cut/paste: http://fiddle.jshell.net/sywz3aym/2/
Please note the fiddle can't actually run; I had intended to write a responder, but I can't right now, I'm afraid.
At the bottom of the JavaScript entry area there is a comment section showing how an ASP.NET responder could look. It uses a "Generic Handler (.ashx)" file if you use Visual Studio; if you use any other language, you will have to use the equivalent option there. You need a request responder that you can customize to return binary data, and you need to set the Content-Type to "application/octet-stream" (octet, for those who do not know, = "group of 8", i.e. a byte :)).
And here is the JavaScript plus comments, in glorious wall-of-text format:
$(document).ready(function () {
    var url = 'your handler URL';
    var oReq = new XMLHttpRequest();
    oReq.open('GET', url, true);
    oReq.responseType = 'arraybuffer';
    oReq.onload = function (oEvent) {
        var buffer = oReq.response;
        var yourData = ExtractData(buffer);
    };
    oReq.send(null);
});

function ExtractData(buffer) {
    var dataReader = {
        dataView: new DataView(buffer),
        readPtr: 0,
        littleEndian: (function () {
            // Detect platform endianness.
            var buffer = new ArrayBuffer(2);
            new DataView(buffer).setInt16(0, 256, true);
            return new Int16Array(buffer)[0] === 256;
        })(),
        setReadPtrOffset: function (byteIndex) {
            this.readPtr = byteIndex;
        },
        nextInt8: function () {
            var data = this.dataView.getInt8(this.readPtr);
            this.readPtr += 1; // Sizeof int8
            return data;
        },
        nextUint8: function () {
            var data = this.dataView.getUint8(this.readPtr);
            this.readPtr += 1; // Sizeof uint8
            return data;
        },
        nextInt32: function () {
            var data = this.dataView.getInt32(this.readPtr, this.littleEndian);
            this.readPtr += 4; // Sizeof int32
            return data;
        },
        nextUint32: function () {
            var data = this.dataView.getUint32(this.readPtr, this.littleEndian);
            this.readPtr += 4; // Sizeof uint32
            return data;
        },
        nextFloat32: function () {
            var data = this.dataView.getFloat32(this.readPtr, this.littleEndian);
            this.readPtr += 4; // Sizeof float
            return data;
        },
        nextFloat64: function () {
            var data = this.dataView.getFloat64(this.readPtr, this.littleEndian);
            this.readPtr += 8; // Sizeof double
            return data;
        },
        nextUTF8String: function (length) {
            var data = String.fromCharCode.apply(null, new Uint8Array(this.dataView.buffer, this.readPtr, length));
            this.readPtr += length; // length bytes, one byte per character
            return data;
        }
    };

    var numberOfInt32ToRead = dataReader.nextInt32(); // The first value could be, for example, the number of ints to read.
    for (var i = 0; i < numberOfInt32ToRead; i++) {
        var someInt = dataReader.nextInt32();
        // doStuffWithInt(someInt);
    }
}
/*
Server-side code looks kind of like this (ASP.NET / C#):

public class YourVeryNiceDataRequestHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/octet-stream"; // <- very important
        List<int> data = SomeMethodWhichGivesYouData();
        // The explicit (int) casts are only there to help me keep an overview
        // of the data being sent; they have no functional meaning other than
        // helping me debug.
        context.Response.BinaryWrite(BitConverter.GetBytes((int)data.Count));
        foreach (int i in data)
        {
            context.Response.BinaryWrite(BitConverter.GetBytes((int)i));
        }
        // You could send structs as well, either with the help of a binary
        // formatter, or like here with BitConverter.GetBytes. Be careful when
        // sending strings/chars, since they are by default (usually) in a
        // widechar format (1 char = 2 bytes). Since I only use English here,
        // I can convert each char to UTF8 (1 byte):
        // context.Response.BinaryWrite(System.Text.Encoding.UTF8.GetBytes(new char[] { (motion.Phase == Phases.Concentric ? 'C' : 'E') })); // Phase (as single char)
    }
}
*/
I hope this either helps you directly or guides you in the right direction.
Please note this is VERY dependent on the data types your server uses, and it might not fall into your "REST" category; you did, however, state that you wanted to optimize for stream size, and this is afaik the best way to do that, short of adding data compression.
JavaScript typed arrays "mimic" C-style data types, like those used in C, C++, C#, Java and so forth.
Disclaimer:
I have not tried this with data > 1 MB. It did, however, cut the total size of data sent from server to client from 2 MB down to a few tens or hundreds of KB, and for the amount of data I send/read, the execution time was "not humanly measurable" apart from the round-trip time of the extra request, since this is requested AFTER the page is loaded in the browser.
Final word of warning: any manual bit serializing/deserializing should be done with care, since it is very easy to make a mistake by forgetting that you changed something on either the client or the server side, and it will give you un-debuggable garbage if you read or write a byte too much or too little on either end. This is why I added the explicit type casts in my server-side code; that way I can open the server code and the client code side by side and match each readInt32 to a Write((int)...).
Not a fun job, but it makes things very compact and very fast (normally I would go for readability, but some tasks just have to run faster than readable code would).
Typed arrays can't be used in every browser, however; caniuse states that about 85% of the web can use them: http://caniuse.com/#feat=typedarrays
In my HTML
Let's say I have 2 input fields with values 3 and 4:
<form onchange="reload()">
    <h2>Input:</h2>
    <input type="number" id="val1" name="val1" value="3">
    <input type="number" id="val2" name="val2" value="4">
    <br><br>
    <h2>Output</h2>
    <input type="text" id="out" name="out" value="untouched by C++"><br>
</form>
In my JavaScript
I get the two values and push them into an array like so:
Module = document.getElementById('module');
var msg = [];
msg.push(Number(document.getElementById('val1').value));
msg.push(Number(document.getElementById('val2').value));
Then I send it to my C++ file to process the message
Module.postMessage(msg);
In my C++ file [ Here is where I am stuck. ]
The code I have to handle the message is below
virtual void HandleMessage(const pp::Var& var_message) {
    std::string message = var_message.AsString();
    pp::Var var_reply = pp::Var(message);
    PostMessage(var_reply);
}
The issue is that it only handles a string [it actually crashes if my msg is an array].
What I want it to expect and accept is an array or an object.
Basically, something like this:
virtual void HandleMessage(const pp::Var& var_message) {
    pp::Var var_reply = var_message[0] + var_message[1]; // I expect this to be 3+4=7
    PostMessage(var_reply);
}
Can somebody help me figure out how to expect an array or an object from JavaScript inside my C++ so that I could calculate values together and send the result back to JavaScript?
I have resolved the issue I had. The best approach is to use an object and pass the values as a stringified JSON object, so
in JavaScript
values = {
    "val1": Number(document.getElementById('val1').value),
    "val2": Number(document.getElementById('val2').value)
};
msg = JSON.stringify(values);
Module.postMessage(msg);
Then handle the message and send the response back to JavaScript in C++. In the header you need to add picojson to handle JSON, and sstream to work with istringstream:
#include <sstream>
#include "picojson.h"
using namespace std;
then later in the code:
virtual void HandleMessage(const pp::Var& var_message) {
    picojson::value v;
    // The message sent from JavaScript arrives as a string;
    // var_message.AsString() will be of the form "{\"val1\":3,\"val2\":4}".
    // Convert it to an istringstream:
    istringstream iss2((string)var_message.AsString());
    // Parse iss2 and extract the values val1 and val2:
    string err = picojson::parse(v, iss2);
    int val1 = (int)v.get("val1").get<double>();
    int val2 = (int)v.get("val2").get<double>();
    // Finally post the reply; you'll see the sum in the JavaScript:
    PostMessage(val1 + val2);
}
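To complete the round trip, the JavaScript side needs a listener for the reply that HandleMessage posts back; a minimal sketch (assuming the Module element from the question, attached before any message arrives):
// Receive the sum posted back by PostMessage() and show it in the output field.
Module.addEventListener('message', function (event) {
    document.getElementById('out').value = event.data; // e.g. 7 for inputs 3 and 4
}, true);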
The documentation hasn't been updated yet, but as of pepper_29 there is now a pp::VarArray interface for accessing arrays.
You can see the header file for the new C++ interface here.
Here's how you can use it (untested):
virtual void HandleMessage(const pp::Var& var_message) {
    if (!var_message.is_array()) return;
    pp::VarArray array(var_message);
    // Do some stuff with the array...
    uint32_t length = array.GetLength();
    double sum = 0;
    for (uint32_t i = 0; i < length; ++i) {
        pp::Var element = array.Get(i);
        if (element.is_number()) {
            sum += element.AsDouble();
        }
    }
    pp::Var var_reply(sum);
    PostMessage(var_reply);
}
I have the same problem: I want to send a string array
var nativeArray = new Array();
nativeArray[0] = "Item 1";
nativeArray[1] = "Item 2";
naclModuleElement.postMessage(nativeArray);
and nothing gets called in HandleMessage. Sending nativeArray.length works and shows '2' on the NaCl side.
After some investigation, I saw that there is no AsArray() function in the pp::Var class; only primitives are available.
There is a class pp::VarArrayBuffer which could be used to send/receive binary info. This could help (I did not download the example posted in it, though).
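Since postMessage also accepts an ArrayBuffer (which arrives in C++ as a pp::VarArrayBuffer), one workaround is to flatten the string array into bytes yourself; a sketch, untested and assuming ASCII-only strings and a hypothetical 0-byte separator convention:
// Flatten the string array into one byte buffer, separating items with 0,
// and post the underlying ArrayBuffer to the module.
var items = ['Item 1', 'Item 2'];
var joined = items.join('\0');
var bytes = new Uint8Array(joined.length);
for (var i = 0; i < joined.length; i++) {
    bytes[i] = joined.charCodeAt(i) & 0xff; // ASCII assumed; no multi-byte chars
}
naclModuleElement.postMessage(bytes.buffer);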
Is there a way to specify the type while parsing JSON, so that the conversion happens automatically? I have the jsonData below, and the x and y values need to be numbers. The only way I can think of is looping and converting each one. Is there better logic, or a more efficient way?
var jsonData = '[{"x:"1", "y":"2"}, {"x:"3", "y":"4"}]';
var needed = [{x: 1, y: 2}, {x: 3, y: 4}];

var vals = $.parseJSON(jsonData);

var Coord = function (x, y) {
    this.x = x;
    this.y = y;
};

var result = [];
function convert(vals) {
    for (var i = 0, l = vals.length; i < l; i++) {
        var d = vals[i];
        result.push(new Coord(Number(d.x), Number(d.y)));
    }
}
The JSON in the jsonData variable is not valid; only the attribute name should be inside the double quotes. Whenever you convert data to JSON, use a parser (explained on json.org) and don't write it by hand. You can always check whether JSON is valid with tools like JSONLint.
Numbers (integers, decimals, floats) are valid JSON data types and don't have to be wrapped in double quotes.
This is valid JSON: [{"x": 1, "y": 2}, {"x": 3, "y": 4}]
However, if you don't have control over the source and retrieve the JSON with Ajax, you can provide a callback function via the dataFilter option. If you're using jQuery 1.5, there are also converters, which are generalized dataFilter callbacks.
I suspect that the x and y coords could be decimal numbers, which is why I chose parseFloat instead of parseInt in the examples below.
Example using a dataFilter callback function (pre jQuery 1.5):
$.ajax({
    url: "/foo/",
    dataFilter: function (data, type) {
        if (type == "json") {
            var json = $.parseJSON(data);
            $.each(json, function (i, o) {
                if (o.x) {
                    json[i].x = parseFloat(o.x);
                }
                if (o.y) {
                    json[i].y = parseFloat(o.y);
                }
            });
            // Return the parsed object: jQuery skips its own parsing
            // when the filter returns a non-string.
            return json;
        }
        return data;
    },
    success: function (data) {
        // data should now have x and y as float numbers
    }
});
Example using a converter (jQuery 1.5 or later):
$.ajaxSetup({
    converters: {
        "json jsoncoords": function (data) {
            if (valid(data)) { // placeholder for your own validation
                $.each(data, function (i, o) {
                    if (o.x) {
                        data[i].x = parseFloat(o.x);
                    }
                    if (o.y) {
                        data[i].y = parseFloat(o.y);
                    }
                });
                return data;
            } else {
                throw exceptionObject; // placeholder for your own error
            }
        }
    }
});

$.ajax({
    url: "/foo/",
    dataType: "jsoncoords"
}).success(function (data) {
    // data should now have x and y as float numbers
});
Otherwise, you will have to loop through your JSON data and convert the strings you find into numbers using parseInt("2", 10).
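A compact alternative to a hand-written loop (plain JavaScript rather than jQuery) is JSON.parse's reviver callback, which performs the conversion during parsing itself; note that it visits every key, so guard the fields explicitly:
// Convert only x and y while parsing; everything else passes through untouched.
var vals = JSON.parse('[{"x": "1", "y": "2"}, {"x": "3", "y": "4"}]', function (key, value) {
    return (key === 'x' || key === 'y') ? Number(value) : value;
});
// vals is now [{x: 1, y: 2}, {x: 3, y: 4}]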
I had the same problem. In one line of code, I remove any quotes that are immediately before or after a numeral, or before a minus sign.
var chartData = $.parseJSON(rawData.replace(/"(-?\d)/g, "$1").replace(/(\d)"/g, "$1"));
In my case the data was coming via AJAX from PHP code which I have control over, and I later found out I could simply use the JSON_NUMERIC_CHECK option - see "PHP json_encode encoding numbers as strings".
These solutions convert anything that looks like a number to a number. So if you can have something that looks like a number but needs to be treated as a string, you will need to go and figure out something else, depending on your data.
If you have control of the json source you can remove the quotes from the coordinates.
According to json.org, a value can be a string, number, object, array, true, false, or null. A string is identified as a sequence of characters enclosed in double quotes; a numeric value not contained in quotes should be interpreted as a number. You may want to verify this with the parser engine in use.
If you're just consuming the JSON, the dataFilter and converter methods already mentioned would be the most jQuery-like way to solve the task.