If we implement a data structure in JavaScript, will the browser's underlying implementation use the same data structure for handling the data?
Take a linked list, for example - we would have a Node structure something like this:
class Node {
    constructor(data) {
        this.data = data;
        this.next = null;
    }
}
Now let's say we set next to another node to create a linked list.
Does next really point to the next Node, as in a linked list, in the underlying implementation? A linked list offers O(1) insertion of a new node. Will this actually be O(1) in JavaScript too, once the JavaScript engine (written in C++ or otherwise) translates the implementation to the system level?
In JavaScript, identifiers (variables and arguments) and properties of objects are all essentially pointers to structures in memory (or on the heap).
Say that you have two Nodes like in your code, one linked to the other. One way to visualize the resulting structure is:
<Node>: memory address 16325
    data: memory address 45642 (points to whatever argument was passed)
    next: memory address 62563
<Node>: memory address 62563 (this is the same as the `next` above)
    data: memory address 36425 (points to whatever argument was passed)
    next: memory address 1 (points to null)
That's not exactly what happens, but it's close enough for the "underlying implementation" you're concerned about. If, in your JavaScript, you have a reference to one Node and you link it to another by assigning to its next property, what's involved under the hood is simply taking the location of the linked object and changing the original object's next property to point to that location. And yes, that operation takes next to no processing power - it's definitely an O(1) process in any reasonable implementation, such as the engines in browsers and Node.js.
The JavaScript engine does not have to re-analyze the structure from the ground up when a new property is created somewhere - it's all just objects and properties linking to other objects.
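To make the pointer semantics concrete, here is a minimal sketch (reusing the Node class from the question) showing that linking two nodes is a single reference assignment:

const head = new Node("first");
const second = new Node("second");

// Linking is one pointer write: the engine stores the location of
// `second` into `head.next`. Nothing is copied and nothing is
// re-analyzed - an O(1) operation.
head.next = second;

// Following the link is likewise a single pointer dereference.
console.log(head.next.data); // "second"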
I am looking for the fastest way to wrap a native C++ object into a v8::Object. A sample of the code I currently have looks like this:
Nan::Persistent<v8::Function> Vector3::constructor;

void Vector3::Initialise() // Static, called once.
{
    v8::Local<v8::FunctionTemplate> objectTemplate = Nan::New<v8::FunctionTemplate>();
    objectTemplate->SetClassName(Nan::New("Vector3").ToLocalChecked());
    objectTemplate->InstanceTemplate()->SetInternalFieldCount(1);
    constructor.Reset(objectTemplate->GetFunction());
}

v8::Local<v8::Object> Vector3::WrapObject(double* components) // Static
{
    v8::Local<v8::Function> constructorFunction = Nan::New(Vector3::constructor);
    v8::Local<v8::Object> localObject = Nan::NewInstance(constructorFunction).ToLocalChecked();
    Nan::Set(localObject, Nan::New("X").ToLocalChecked(), Nan::New<v8::Number>(components[0]));
    Nan::Set(localObject, Nan::New("Y").ToLocalChecked(), Nan::New<v8::Number>(components[1]));
    Nan::Set(localObject, Nan::New("Z").ToLocalChecked(), Nan::New<v8::Number>(components[2]));
    return localObject;
}
The full code is a bit more complex as each vector is a property of some wrapped entity class.
From reading the many online tutorials and posts, it seems like this is one of the most common ways to wrap an object. However, is there any faster way?
Profiling the code highlights Nan::Set as a major bottleneck. In particular, it seems Set internally calls two v8 methods, v8::internal::Object::SetProperty and v8::internal::LookupIterator::PropertyOrElement, with each method taking up about 50% of the total Set time.
So my thoughts are:
Is there any way to bypass the lookup function and call SetProperty directly? Surely the lookup step is redundant here, as I know ahead of time that I am setting a property and not an element.
Is there perhaps a way to define the properties on the object template in the Initialise method and then set them by some integer index (while still retaining them as named properties rather than elements) instead of by string name, so that the lookup is faster?
Is there any way to set multiple properties on an object at the same time?
Is there a better method entirely?
I've encountered this problem on several occasions, with objects created dynamically, regardless of whether they were created in QML or C++. The objects are deleted while still in use, causing hard crashes for no apparent reason. The objects are still referenced and parented to other objects all the way down to the root object, so I find it strange for QML to delete those objects while their refcount is still above zero.
So far the only solution I found was to create the objects in C++ and set the ownership to CPP explicitly, making it impossible to delete the objects from QML.
At first I assumed it may be an issue with parenting, since I was using QObject derived classes, and the QML method of dynamic instantiation passes an Item for a parent, whereas QtObject doesn't even come with a parent property - it is not exposed from QObject.
But then I tried with a QObject-derived class which exposes and uses parenting, and finally even tried using Item just for the sake of being sure the objects are properly parented, and yet this behavior still persists.
Here is an example that produces this behavior, unfortunately I could not flatten it down to a single source because the deep nesting of Components breaks it:
// ObjMain.qml
import QtQuick 2.0

Item {
    property ListModel list : ListModel { }
    Component.onCompleted: console.log("created " + this + " with parent " + parent)
    Component.onDestruction: console.log("deleted " + this)
}
// Uimain.qml
import QtQuick 2.0

Item {
    id: main
    width: childrenRect.width
    height: childrenRect.height
    property Item object
    property bool expanded : true

    Loader {
        id: li
        x: 50
        y: 50
        active: expanded && object && object.list.count
        width: childrenRect.width
        height: childrenRect.height
        sourceComponent: listView
    }

    Component {
        id: listView
        ListView {
            width: contentItem.childrenRect.width
            height: contentItem.childrenRect.height
            model: object.list
            delegate: Item {
                id: p
                width: childrenRect.width
                height: childrenRect.height
                Component.onCompleted: Qt.createComponent("Uimain.qml").createObject(p, {"object" : o})
            }
        }
    }

    Rectangle {
        width: 50
        height: 50
        color: "red"
        MouseArea {
            anchors.fill: parent
            acceptedButtons: Qt.RightButton | Qt.LeftButton
            onClicked: {
                if (mouse.button == Qt.RightButton) {
                    expanded = !expanded
                } else {
                    object.list.append({ "o" : Qt.createComponent("ObjMain.qml").createObject(object) })
                }
            }
        }
    }
}
// main.qml
import QtQuick 2.0
import QtQuick.Window 2.0

Window {
    visible: true
    width: 1280
    height: 720

    ObjMain {
        id: obj
    }

    Uimain {
        object: obj
    }
}
The example is a trivial object tree builder, with the left button adding a leaf to the node and the right button collapsing the node. All it takes to reproduce the bug is to create a node with depth of 3 and then collapse and expand the root node, upon which the console output shows:
qml: created ObjMain_QMLTYPE_0(0x1e15bb8) with parent QQuickRootItem(0x1e15ca8)
qml: created ObjMain_QMLTYPE_0(0x1e5afc8) with parent ObjMain_QMLTYPE_0(0x1e15bb8)
qml: created ObjMain_QMLTYPE_0(0x1e30f58) with parent ObjMain_QMLTYPE_0(0x1e5afc8)
qml: deleted ObjMain_QMLTYPE_0(0x1e30f58)
The object of the deepest node is deleted for no reason, even though it is parented to the parent node Item and referenced in the JS object in the list model. Attempting to add a new node to the deepest node crashes the program.
The behavior is consistent, regardless of the structure of the tree, only the second level of nodes survives, all deeper nodes are lost when the tree is collapsed.
The fault does not lie in the list model being used as storage, I've tested with a JS array and a QList and the objects are still lost. This example uses a list model merely to save the extra implementation of a C++ model. The sole remedy I found so far was to deny QML ownership of the objects altogether. Although this example produces rather consistent behavior, in production code the spontaneous deletions are often completely arbitrary.
In regard to the garbage collector - I've tested it before and noticed it is quite liberal: creating and deleting objects worth 100 MB of RAM did not trigger the garbage collection to release that memory, and yet in this case only a few objects, worth a few hundred bytes, are being hastily deleted.
According to the documentation, objects which have a parent or are referenced by JS should not be deleted, and in my case both conditions hold:

"The object is owned by JavaScript. When the object is returned to QML as the return value of a method call, QML will track it and delete it if there are no remaining JavaScript references to it and it has no QObject::parent()."
As mentioned in Filip's answer, this does not happen if the objects are created by a function which is not in an object that gets deleted, so it may have something to do with the vaguely mentioned JS state associated with QML objects. But I am essentially still in the dark as to why the deletion happens, so the question is effectively still unanswered.
Any ideas what causes this?
UPDATE: Nine months later still zero development on this critical bug. Meanwhile I discovered several additional scenarios where objects still in use are deleted, scenarios in which it doesn't matter where the object was created and the workaround to simply create the objects in the main qml file doesn't apply. The strangest part is the objects are not being destroyed when they are being "un-referenced" but as they are being "re-referenced". That is, they are not being destroyed when the visual objects referencing them are getting destroyed, but when they are being re-created.
The good news is that it is still possible to set the ownership to C++ even for objects which are created in QML, so the flexibility of object creation in QML is not lost. There is the minor inconvenience of having to call a function to protect and delete every object, but at least you avoid the buggy lifetime management of QtQuick. Gotta love the "convenience" of QML though - being forced back to manual object lifetime management.
I've encountered this problem on several occasions, with objects created dynamically, regardless of whether they were created in QML or C++
Objects are only considered for garbage collection if they have JavaScriptOwnership set, which is the case if:
they were created directly by JavaScript/QML,
their ownership was explicitly set to JavaScriptOwnership, or
they were returned from a Q_INVOKABLE method without setObjectOwnership() having been called on them previously.
In all other cases, objects are assumed to be owned by C++ and are not considered for garbage collection.
At first I assumed it may be an issue with parenting, since I was using QObject derived classes, and the QML method of dynamic instantiation passes an Item for a parent, whereas QtObject doesn't even come with a parent property - it is not exposed from QObject.
The Qt object tree is completely different from the Qml object tree. QML only cares about its own object tree.
delegate: Item {
    id: p
    width: childrenRect.width
    height: childrenRect.height
    Component.onCompleted: Qt.createComponent("Uimain.qml").createObject(p, {"object" : o})
}
The combination of dynamically created objects in the onCompleted handler of a delegate is bound to lead to bugs.
When you collapse the tree, the delegates get destroyed, and with them all of their children, which includes your dynamically created objects. It doesn't matter if there are still live references to the children.
Essentially you've provided no stable backing store for the tree - it consists of a bunch of nested delegates which can go away at any time.
Now, there are some situations where QML owned objects are unexpectedly deleted: any C++ references don't count as a ref for the garbage collector; this includes Q_PROPERTYs. In this case, you can:
Set CppOwnership explicitly
Use QPointer<> to hold the reference to deal with objects going away.
Hold an explicit reference to the object in QML.
QML does not manage memory the way C++ does. QML is intended to take care of allocating and releasing memory itself. I think the problem you found is just a result of this.
If dynamic object creation goes too deep, everything seems to be deleted. So it does not matter that your created objects were part of the data - they are destroyed too.
Unfortunately my knowledge ends here.
One of the workarounds (supporting my previous statement) is to move the creation of the data structure out of the dynamic UI QML files:
Place the object-creating function in main.qml, for example:
function createNewObject(parentObject) {
    parentObject.list.append({ "o" : Qt.createComponent("ObjMain.qml").createObject(parentObject) })
}
Then use this function in your code instead:
// fragment of the Uimain.qml file
MouseArea {
    anchors.fill: parent
    acceptedButtons: Qt.RightButton | Qt.LeftButton
    onClicked: {
        if (mouse.button == Qt.RightButton) {
            expanded = !expanded
        } else {
            createNewObject(object)
        }
    }
}
Create an array inside a .js file by declaring var myArray = []; at the top level of that .js file.
Now you can reference any object that you append to myArray including ones that are created dynamically.
JavaScript variables are not deleted by garbage collection as long as they remain defined, so if you define one as a global object and then include that JavaScript file in your QML document, it will remain as long as the main QML is in scope.
In a file called backend.js:

var tiles = [];

function create_square(new_square) {
    var component = Qt.createComponent("qrc:///src_qml/src_game/Square.qml");
    var sq = component.createObject(background, { "backend" : new_square });
    sq.x = new_square.tile.x;
    sq.y = new_square.tile.y;
    sq.width = new_square.tile.width;
    sq.height = new_square.tile.height;
    tiles[game.board.getIndex(new_square.tile.row, new_square.tile.col)] = sq;
    sq.visible = true;
}
EDIT:
Let me explain a little more clearly how this could apply to your particular tree example.
By using the line property Item object, you are inadvertently making it a property of an Item, which is treated differently in QML. Specifically, properties fall under a unique set of rules in terms of garbage collection, since the QML engine can simply start removing properties of any object to decrease the memory required to run.
Instead, at the top of your QML document, include this line:
import "./object_file.js" as object_file
Then, in the file object_file.js, include this line:
var object_hash = [];
Now you can use object_hash at any time to save your dynamically created components and prevent them from being wiped out, by referencing the object_file.object_hash object.
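For instance, in the delegate from the question, the completion handler could stash the created object there (a sketch, assuming object_file.js is imported as shown above):

Component.onCompleted: {
    var child = Qt.createComponent("Uimain.qml").createObject(p, {"object" : o})
    // Keeping a reference in the global array gives the garbage
    // collector a live root, so the object is not collected.
    object_file.object_hash.push(child)
}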
No need to go crazy changing ownership, etc.
Backbone provides options to select models from collections both by ID (a unique identifier attribute assigned to every model) and by index. Which of these is the faster way to access items from a collection?
Cracking open Backbone.js, I can see that collection.get(id) (the select-by-ID function) uses a simple object-literal look-up and collection.at(index) (the select-by-index function) uses a simple array look-up.
from Backbone.js:
collection.get(id):

// Get a model from the set by id.
get: function(obj) {
    if (obj == null) return void 0;
    return this._byId[obj] || this._byId[obj.id] || this._byId[obj.cid];
}

collection.at(index):

// Get the model at the given index.
at: function(index) {
    return this.models[index];
}
Because of this, the answer to this question comes down to which is faster - array access or object-literal access (assuming here that .get is used in its first form, where it is passed an ID rather than a model with an id or cid on it).
According to this JSPerf, select by index (using collection.at(index)) is generally faster than select by ID (using collection.get(id)), but how much faster varies widely by browser. On Chrome and at least one of the versions of Firefox I tested, the difference is negligible but still systematically in favor of select by index; in IE11, however, select by index is consistently (and almost exactly) twice as fast.
The moral of the story here is to use select by index whenever possible; hashed object retrieval is fast and convenient, but lacks the raw efficiency of indexed look-ups.
To access objects in a hash, JavaScript engines must go through an additional look-up step; this, together with the overall complexity of objects, makes them a less-than-ideal choice for any script where performance is a consideration.
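Stripped of Backbone, the comparison reduces to plain array indexing versus object-key lookup; a rough sketch of the two access patterns (illustration only, not a rigorous benchmark):

var models = [];
var byId = {};
for (var i = 0; i < 100000; i++) {
    var model = { id: 'id' + i };
    models[i] = model;      // indexed storage, as used by collection.at
    byId[model.id] = model; // hashed storage, as used by collection.get
}

var a = models[54321];   // indexed look-up: direct array access
var b = byId['id54321']; // hashed look-up: the key must be hashed and probed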
EDIT
I made a JSPerf and ran it in Chrome, since Chrome uses V8:
http://jsperf.com/passing-large-objects
It looks like passing a large object doesn't matter; the difference is negligible. However, lookup on an object at some point gets a lot slower.
INTRODUCTION:
I’m writing a 2D JavaScript game engine while following component-based and data-oriented (via typed arrays) design principles. It’s designed for use by simulation-based multiplayer netcode.
My performance concerns are for the master simulation that will be running on the server; I believe that client browsers will be more than fast enough. As of now, the server is Node.js, so the V8 engine is involved. However, I’m not ruling out a switch to other technologies like Vert.x, which I believe uses the Rhino interpreter.
THE QUESTION:
When does JavaScript access objects in memory?
More specifically, let’s say I have an object like so.
var data = {
    a1 : new Float64Array(123456),
    a2 : new Float64Array(123456),
    …
    a9001 : new Float64Array(123456)
};
And now let’s say I pass it to this function like so.
var update = function(obj) {
    for (var property in obj) {
        if (obj.hasOwnProperty(property)) {
            obj[property][0]++;
        }
    }
};

update(data);
At what point are the Float64 arrays accessed? Does it access it the moment I pass data into update, attempting to load all 9001 arrays into the memory cache and page faulting like crazy? Does it wait to load the arrays until the hasOwnProperty? Or obj[property]? Or obj[property][0]?
WHY I ASK:
I’m trying to follow the data oriented design principles of keeping stuff in contiguous blocks of memory. Depending on how JavaScript works with memory, I will have to change the interface and structure of the engine.
For example, if all the arrays in data are accessed the moment I pass it into update, then I have to make special data objects with as few arrays as possible to reduce page faulting. If however the arrays are only accessed at say obj[property], then I can pass a large data object with more arrays without any performance penalties, which simplifies a lot of things.
A big reason why I’m not sure of the answer is because JavaScript objects aren’t objects like in other languages. From some random reading here or there, I’ve heard stuff like JavaScript objects have their own internal class. I’ve also heard of things like JavaScript objects being hash tables so you incur a lookup time with every property that you access.
Then I’ve heard that the interpreters treat objects differently based on how large the object is; smaller ones are treated one way and larger ones another. So jsperf stuff may not be an accurate measure.
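Rather than relying on hearsay about engine internals, one way to observe when a property is actually read is to instrument it with a getter and watch when it fires; a minimal sketch:

var data = {};
Object.defineProperty(data, 'a1', {
    enumerable: true,
    get: function () {
        console.log('a1 read'); // fires only when the property is accessed
        return new Float64Array(123456);
    }
});

var update = function (obj) {
    console.log('update entered'); // nothing has been read at this point
    for (var property in obj) {
        var arr = obj[property];   // 'a1 read' is logged here, not earlier
        arr[0]++;
    }
};

update(data); // passing `data` copies only a reference to it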
FURTHER:
Taking the example further, there’s the question of how JavaScript handles nested objects. For example:
var properties = {
a1 : {
a1 : {
…
a1 : {
}
}
},
a2 : {
a2 : {
…
a2 : {
}
}
},
…
a9001 : {
a9001 : {
…
a9001 : {
}
}
}
};
var doSomething = function(obj) {
};

doSomething(properties);
If passing in properties to doSomething causes every sub object and their sub objects to get accessed, then that’s a performance hit. If however it just passes a reference to the properties object and only accesses the sub objects when the code calls them, then it’s not bad at all.
If I had access to Vectors, I’d make an entity system framework in a heartbeat and this wouldn’t really be a problem. If I had access to pointers, which I believe only accesses the object when the code converts the pointer, then I could try other things. But only having typed arrays at my disposal limits my options, so I end up agonizing over questions like this.
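For what it's worth, the usual data-oriented pattern when only typed arrays are available is to pack each component's fields into one contiguous array and address entities by stride and offset; a minimal sketch (all names hypothetical):

// One contiguous block for a position component: [x0, y0, x1, y1, ...]
var MAX_ENTITIES = 10000;
var STRIDE = 2; // fields per entity: x, y
var positions = new Float64Array(MAX_ENTITIES * STRIDE);
var velocities = new Float64Array(MAX_ENTITIES * STRIDE);

function setPosition(entity, x, y) {
    positions[entity * STRIDE] = x;
    positions[entity * STRIDE + 1] = y;
}

// Iteration walks memory sequentially, which is cache-friendly.
function integrate(dt) {
    for (var i = 0; i < positions.length; i++) {
        positions[i] += velocities[i] * dt;
    }
}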
Thanks for any insight you can provide. I really appreciate it.
To be honest, I'm not quite sure where to start with this question.
I'll describe the situation: I am in the process of making a level editor for an HTML5 game. The level editor is already functional - now I would like to save/load levels created with this editor.
Since this is all being done in Javascript (the level editor as well as the game), I was thinking of having the save simply convert the level to a JSON and the load, well... un-jsonify it.
The problem is that the level contains several types of objects (several different types of entities, several types of animation objects, etc.). Right now, every time I want to add an object to the game, I have to write an unjsonify method specifically for that object and then modify the level object's unjsonify method so it can handle unjsonifying the newly defined type of object.
I can't simply use JSON.parse because that just returns an object with the same keys and values as the original had, but it is not actually an object of that class/prototype. My question is, then, is there a correct way to do this that does not require having to continuously modify the code every time I want to add a new type of object to the game?
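To make the problem concrete, here is a quick sketch of why JSON.parse alone is not enough (hypothetical Player class):

function Player() { this.health = 100; }
Player.prototype.attack = function () { /* ... */ };

var restored = JSON.parse(JSON.stringify(new Player()));
console.log(restored.health);            // 100 - plain data survives
console.log(restored instanceof Player); // false - the prototype is lost
// restored.attack();                    // would throw a TypeError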
I would create serialise/deserialise methods on each of your objects to put their state into JSON objects and recover it from them. Compound objects would recursively serialise/deserialise their children. To give an example:
function Player() {
    this.weapon = new Weapon();
}

Player.prototype.serialise = function () {
    return { 'type': 'Player', 'weapon': this.weapon.serialise() };
};

Player.deserialise = function (json_object) {
    var player = new Player();
    player.weapon = Weapon.deserialise(json_object.weapon);
    return player;
};
Obviously in real code you would have checks to make sure you were getting the types of objects that you expect. Arrays and simple hash objects could be simply copied during serialisation/deserialisation though their children will often need to be recursed over.
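Building on the answer's 'type' field, one way to avoid editing a central unjsonify method for every new class is a registry mapping type names to deserialise functions; a sketch (the registry itself is my addition, not part of the answer's code):

// Each class registers itself once; nothing central needs editing.
var deserialisers = {};

function registerType(name, klass) {
    deserialisers[name] = klass.deserialise;
}

registerType('Player', Player);
// registerType('Weapon', Weapon); // and so on for each new class

function deserialise(json_object) {
    var fn = deserialisers[json_object.type];
    if (!fn) throw new Error('Unknown type: ' + json_object.type);
    return fn(json_object);
}

// Round trip: serialise, stringify, parse, then dispatch by type.
var restored = deserialise(JSON.parse(JSON.stringify(player.serialise())));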