I am having an extremely bizarre problem. I have a Backbone collection, and I am using the where method to find models in the collection that match a certain attribute. My problem is the inconsistency of the results.
I have a joinedGoalList which keeps track of goals that a user has joined. Let's say this collection contains two goals, with IDs of 1 and 3. When a user accesses /goals/3, a message should display saying that the user has joined the goal.
The problem is that when I access /goals/3, half the time the message displays and the other half of the time it does not.
The odd thing is that this problem only happens on my remote server and not on my local host.
In my code, I query the joinedGoalList for an ID of 3; if there is a match, the matches array's length has to be greater than 0, so I render the message showing that the user has joined the goal.
Here is the code (joinedGoalList is a Backbone collection):
console.log(joinedGoalList);
var matches = joinedGoalList.where({id: this.model.get("id")});
console.log(matches);
console.log(matches.length);
if (matches.length > 0) {
  console.log("the matches length is > 0");
  this.renderLeaveGoal();
} else {
  console.log("the matches length is 0");
  this.renderJoinGoal();
}
Here are the results of console.log(joinedGoalList) (they are consistent):
child
_byCid: Object
_byId: Object
_callbacks: Object
length: 2
models: Array[2]
__proto__: ctor
As you can see, the length is 2. One of the objects has an ID of 1 and the other object has an ID of 3. This is consistent throughout the page loads.
The inconsistency occurs when I do a match on the array for an object with an ID of 3. Some page loads find the match while other page loads do not find the match.
The results of console.log(matches.length) are either 0 or 1 on my remote server, yet on my localhost, the results are always 1.
I'm pretty sure that the sequence of events goes like this:
1. You call fetch on the collection to load your data from the server.
2. You call console.log(joinedGoalList); this is asynchronous in some browsers.
3. You call joinedGoalList.where and find an empty collection.
4. The fetch call from 1 returns and populates the collection.
5. The console.log call from 2 executes and prints out the populated collection; this call holds a reference to joinedGoalList, and that reference now points at a populated collection.
When you do this locally, the AJAX call from 1 returns quickly enough that step 4 occurs before step 3, and everything behaves the way you're expecting it to.
You have a couple options here:
fetch has a success callback:
The options hash takes success and error callbacks which will be passed (collection, response) as arguments.
So you could use the success callback to delay whatever is calling where until the server has responded and the collection is populated.
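A minimal sketch of that approach (someId stands in for whatever ID you are checking, and the render calls are the ones from your view):

joinedGoalList.fetch({
  success: function (collection, response) {
    // The collection is populated at this point, so where() sees the data.
    var matches = collection.where({id: someId});
    // ... call renderLeaveGoal()/renderJoinGoal() based on matches.length ...
  }
});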
fetch resets the collection:
When the model data returns from the server, the collection will reset.
and reset will
replace a collection with a new list of models (or attribute hashes), triggering a single "reset" event at the end.
So you could listen for the "reset" event and use that event to trigger whatever is calling where.
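A sketch of that event-based approach (note that in Backbone 1.0 and later, fetch merges by default, so you would need fetch({reset: true}) to get a "reset" event):

joinedGoalList.on("reset", function (collection) {
  // Runs once fetch() has replaced the collection's contents.
  var matches = collection.where({id: someId});
  // ... render based on matches.length ...
});
joinedGoalList.fetch();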
We are troubled by occasionally occurring "cursor not found" exceptions for some Morphia queries' asList calls, and I've found a hint on SO that this might be quite memory-consumptive.
Now I'd like to know a bit more about the background: can somebody explain (in English) what a cursor (in MongoDB) actually is? Why can it be kept open or not be found?
The documentation defines a cursor as:
A pointer to the result set of a query. Clients can iterate through a cursor to retrieve results. By default, cursors timeout after 10 minutes of inactivity
But this is not very telling. Maybe it could be helpful to define a batch for query results, because the documentation also states:
The MongoDB server returns the query results in batches. Batch size will not exceed the maximum BSON document size. For most queries, the first batch returns 101 documents or just enough documents to exceed 1 megabyte. Subsequent batch size is 4 megabytes. [...] For queries that include a sort operation without an index, the server must load all the documents in memory to perform the sort before returning any results.
Note: the queries in question don't use sort statements at all, nor any limit or offset.
Here's a comparison between toArray() and cursors after a find() in the Node.js MongoDB driver. Common code:
var MongoClient = require('mongodb').MongoClient,
    assert = require('assert');

MongoClient.connect('mongodb://localhost:27017/crunchbase', function (err, db) {
  assert.equal(err, null);
  console.log('Successfully connected to MongoDB.');

  const query = { category_code: "biotech" };

  // toArray() vs. cursor code goes here
});
Here's the toArray() code that goes in the section above.
db.collection('companies').find(query).toArray(function (err, docs) {
  assert.equal(err, null);
  assert.notEqual(docs.length, 0);

  docs.forEach(doc => {
    console.log(`${doc.name} is a ${doc.category_code} company.`);
  });

  db.close();
});
Per the documentation:
The caller is responsible for making sure that there is enough memory to store the results.
Here's the cursor-based approach, using the cursor.forEach() method:
const cursor = db.collection('companies').find(query);

cursor.forEach(
  function (doc) {
    console.log(`${doc.name} is a ${doc.category_code} company.`);
  },
  function (err) {
    assert.equal(err, null);
    return db.close();
  }
);
With the forEach() approach, instead of fetching all the data into memory, we're streaming it to our application. find() returns a cursor immediately, because it doesn't actually make a request to the database until we try to use some of the documents it will provide; the point of the cursor is to describe our query. The second parameter to cursor.forEach shows what to do when an error occurs.
In the initial version of the code above, it was toArray() that forced the database call: it meant we needed ALL the documents and wanted them in an array.
Note that MongoDB returns data in batches: as the application consumes documents from the cursor, the driver issues further requests to MongoDB for the next batch.
forEach scales better than toArray because we can process documents as they come in, all the way to the end. Contrast that with toArray, where we wait for ALL the documents to be retrieved and the entire array to be built, gaining no advantage from the fact that the driver and the database system are working together to batch results to the application. Batching is meant to provide efficiency in terms of memory overhead and execution time; take advantage of it in your application if you can.
I am by no means a MongoDB expert, but I just want to add some observations from working on a medium-sized Mongo system for the last year. Also, thanks to @xameeramir for the excellent walkthrough of how cursors work in general.
The causes of a "cursor not found" exception may be several. One that I have noticed is explained in this answer.
The cursor lives server-side. It is not distributed over a replica set but exists on the instance that is primary at the time of creation. This means that if another instance takes over as primary, the cursor will be lost to the client. If the old primary is still up and around, it may still be there, but to no use; I guess it is garbage-collected away after a while. So if your Mongo replica set is unstable, or you have a shaky network in front of it, you are out of luck when doing any long-running queries.
If the full content of what the cursor wants to return does not fit in memory on the server, the query may be very slow. The RAM on your servers needs to be larger than the largest query you run.
All this can partly be avoided by designing better. For a use case with large, long-running queries, you may be better off with several smaller database collections instead of one big one.
The collection's find method returns a cursor. It points to the set of documents (called the result set) that match the query filter; the result set is the actual documents returned by the query, but it lives on the database server.
To the client program, for example the mongo shell, you get a cursor. You can think of the cursor as an API or a program to work with the result set. The cursor has many methods which can be run to perform actions on the result set; some of the methods affect the result set's data, and some provide status or info about it.
As the cursor maintains information about the result set, some of that information can change as you consume the result set's data by applying other cursor methods. You use these methods and this information to suit your application, i.e., how and what you want to do with the queried data.
Here is how you work with the result set using the cursor and some of its commonly used methods and features from the mongo shell:
The count() method returns the count of the documents in the result set, as determined by the initial query. It stays constant at any point in the life of the cursor; this information remains the same even after the cursor is closed or exhausted.
As you read documents from the result set, the result set gets exhausted; once it is completely exhausted, you cannot read any more. hasNext() tells you whether there are any documents still available to be read, returning a boolean true or false. next() returns a document if one is available (you first check with hasNext, and then do a next). These two methods are commonly used to iterate over the result set's data; another iteration method is forEach().
The data is retrieved from the server in batches, which have a default size. With the first batch you read its documents, and once they are all read, the following next() call retrieves the next batch, and so on, until all documents are read from the result set. The batch size can be configured, and you can also get its status.
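For example, in the mongo shell (a batch size of 50 is assumed for illustration):

var cur = db.test.find( { } ).batchSize(50);  // ask the server for batches of 50
cur.next();                                   // lazily fetches the first batch
cur.objsLeftInBatch();                        // documents left in the current batch, e.g. 49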
If you apply the toArray() method to the cursor, then all the remaining documents in the result set are loaded into the memory of your client computer and are available as a JavaScript array, and the result set's data is exhausted. A subsequent hasNext call will return false, and next will throw an error (once you exhaust the cursor, you cannot read data from it). This method loads all the result set's data into your client's memory (the array), which can be memory-consuming for large result sets.
The itcount() returns the count of remaining documents in the result set and exhausts the cursor.
There are cursor methods like isClosed(), isExhausted(), size() which give status information about the cursor and its underlying result set as you work with your data.
Those are the basic features of cursors and result sets. There are many more cursor methods, and you can try them and see how they work to get a better understanding.
Reference:
mongo shell's cursor methods
Cursor behavior with the aggregate method (the collection's aggregate method also returns a cursor)
Example usage in mongo shell:
Assume the test collection has 200 documents (run the commands in the same sequence).
var cur = db.test.find( { } ).limit(25) creates a result set with 25 documents only.
But, cur.count() will show 200, which is the actual count of documents matched by the query's filter.
hasNext() will return true.
next() will return a document.
itcount() will return 24 (and exhausts the cursor).
itcount() again will return 0.
cur.count() will still show 200.
This error also occurs when you have a large data set and are doing batch processing on it, with each batch taking long enough that the total time exceeds the default cursor lifetime.
In that case you need to change that default to tell Mongo not to expire the cursor until processing is done.
Do check the noCursorTimeout documentation.
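A minimal shell sketch of that option (the per-document processing is a placeholder):

var cur = db.test.find( { } ).noCursorTimeout();  // opt out of the 10-minute idle timeout
try {
  cur.forEach(function (doc) {
    // ... long-running per-document processing here ...
  });
} finally {
  cur.close();  // a noCursorTimeout cursor must be closed explicitly if it is not exhausted
}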
A cursor is an object returned by calling db.collection.find() that enables iterating through the documents (the NoSQL equivalent of SQL "rows") of a MongoDB collection (the NoSQL equivalent of a "table").
In case your cluster is stable, with no members down or changing state, the most likely reason for not finding the cursor is this:
The default idle cursor timeout is 10 minutes, but from version 3.6 onwards the cursor is also associated with a session, which has a default session timeout of 30 minutes. So even if you set the cursor not to expire with the noCursorTimeout() option, you are still limited by the 30-minute session timeout. To avoid your cursor being killed by the session timeout, you need to periodically execute the refreshSessions command in your code:
db.adminCommand({"refreshSessions" : [sessionId]})
This extends the session by another 30 minutes, so your cursor is not killed while you are still working with the data before fetching the next batch...
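Putting it together, a rough shell sketch (the collection name and per-document work are placeholders):

var session = db.getMongo().startSession();
var sessionId = session.getSessionId();  // the session id document that refreshSessions expects
var cur = session.getDatabase("test").test.find( { } ).noCursorTimeout();
var n = 0;
while (cur.hasNext()) {
  var doc = cur.next();
  // ... slow per-document processing here ...
  if (++n % 1000 === 0) {
    db.adminCommand({ "refreshSessions": [sessionId] });  // buy another 30 minutes
  }
}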
Check the docs for details on how to do it:
https://docs.mongodb.com/manual/reference/method/cursor.noCursorTimeout/
In my post-processor I have the following code:
if (i as Integer < protocolsArray.length) {
    protocolsArray[i as Integer] = "${requestProtocolId}";
}
and it worked: I put the requestProtocolId that I got via the JSON Extractor into position i (with i == 0).
But the number of virtual users defined for this HTTP request is greater than one, so it sends the request again and saves the new requestProtocolId in position zero again, overwriting the previous protocol. I understand that each new request starts over, taking the initial values assigned to the variables again, but I've already tried incrementing i (i++) and storing the array with position zero filled in:
vars.putObject("protocolsArray", protocolsArray);
but it always returns the value set before the HTTP request. Is there a way to change that?
If I instead used a single-user thread group with an Iteration Controller set to 5, it would be like the same user sending the request five times, right?
I wanted to simulate different users, but always keep the requestProtocolId values saved in the array positions, because I'm going to use them in another request.
In its current state your question doesn't make a lot of sense.
JMeter Variables are local to the thread (virtual user)
If the user starts a new iteration, the variable will still be there; if this is not something you want/expect, use the vars.remove() function to ensure that the variable is always "fresh"
Don't inline JMeter Functions or Variables in scripts; in the case of Groovy (the recommended scripting option), only the first occurrence will be cached and used during subsequent iterations. See the JSR223 Sampler documentation for more information
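For example, a Groovy JSR223 PostProcessor along these lines reads everything through the vars API instead of inlining ${...} (the variable and array names are taken from the question; the i counter is assumed to already live in a JMeter variable):

def protocolsArray = vars.getObject('protocolsArray')   // assumes the array was put there earlier
def i = (vars.get('i') ?: '0') as Integer
def requestProtocolId = vars.get('requestProtocolId')

if (i < protocolsArray.length) {
    protocolsArray[i] = requestProtocolId        // no ${...} inlining, so nothing stale is cached
    vars.put('i', String.valueOf(i + 1))         // persist the incremented index for the next iteration
    vars.putObject('protocolsArray', protocolsArray)
}

Keep in mind that the array is still per-thread; sharing data across virtual users would need props (JMeter properties) or some external store.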
I've hit a snag and I'm hoping you guys can help:
I have a basic HTTP get in Angular - I've done it 100 times at this point. My JSON response on the server, confirmed, is formed like so:
[{"date":"07\/24\/2017","time_start":"02:00 PM","time_end":"05:00 PM","name":"Adult Ministries Registration","room":"","speaker":"none","speaker_writein":"","xhead":"yes"}]
You see the final property is "xhead" and it's a string "yes" or "no" - which I'm using to determine when to show titles that break up events by day (grouping them by time under a date in a schedule list).
The problem for me is that when this data comes into the Angular app, "xhead" is undefined. I'm doing a simple console.log on the response data and it shows as undefined.
Even more odd is that if I change these values to 1 or 0, they all come into Angular as either 1 or 0 - not differentiated as they should be for each item in the collection (response).
When I visit the endpoints in the browser, the data is as it should be.
Help!!! I'm losing my mind.
It turns out that using ng-if="response.xhead=yes" in the view changes the value of the item: the single = is an assignment, not a comparison, so evaluating the expression assigned the (undefined) scope variable yes to xhead. That was the problem.
The ngIf directive removes or recreates a portion of the DOM tree based on an {expression}. If the expression assigned to ngIf evaluates to a false value then the element is removed from the DOM, otherwise a clone of the element is reinserted into the DOM.
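A corrected guard compares instead of assigns, with the string quoted (the item name here is assumed):

<div ng-if="item.xhead == 'yes'">
  <!-- day-heading markup goes here -->
</div>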
I have an FS Collection in Meteor called "MyUploads". I will be performing some functionality on the files uploaded into the collection, after which additional files will be created and added to MyUploads. I have created an event, called #parseUploads, in which this will take place. Within the event, and prior to the addition of the subsequent files to MyUploads, I have created a variable:
var previousCount = fileCount;
which is responsible for storing the original count of documents that the user had added to the Collection. Then, the parsing function will perform on each of these documents, and add the newly parsed documents to the collection.
My question is: how do I loop through the collection from the first document up through the document at position previousCount?
In other words, if the previousCount has a value of 3 (meaning, that the user had uploaded 3 documents), then after the parsing functionality has been performed, there will be 3 subsequent documents added to the collection. I then would like to know how I can loop through the Collection and delete only the first 3 documents, while leaving the 3 subsequent documents remaining in the Collection.
I would recommend adding a boolean field to the collection to act as a flag to denote parsed items. Once an item is parsed, you can update it.
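For example, the flagging step might look like this (a sketch; the stale field name is an assumption, and a selector-based multi-update like this needs to run on the server):

// Before parsing: flag every existing (pre-parse) document as stale.
MyUploads.update({}, {$set: {stale: true}}, {multi: true});
// #parseUploads then inserts the new documents, which carry no flag.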
Then you can remove items from the collection based on the presence of that flag.
MyUploads.remove({stale: true});
Hope that helps,
Elliott
I've been sitting here for a while now wondering why I'm losing an array parameter on a function call when calling it a second time.
The script I'm working on is modeled after CouchDB/PouchDB and stores items as JSON strings in multiple storages (including local storage). The parameters are:
_id id of the item
_rev revision string (version), counter and hash
_content whatever content
_revisions array of all prior hashes and current counter
_revs_info all previous revisions of this item with status
I'm currently trying a PUT operation, which by default updates an existing document. As I'm working with multiple storages, I also have a PUT SYNC, which "copy&pastes" versions of a document from one storage to another (with the goal of having every version available on every storage). I'm also keeping a separate file with a document tree, which stores all the version hashes. This tree file is updated on SYNCs using the _revs_info supplied with the PUT.
My problem is sequential SYNC PUTs. The first one works, on the second I'm losing the _revs_info parameter. And I don't know why...
Here is my first call (from my QUnit module), which works fine:
o.jio.put({
"content":'a_new_version',
"_id":'myDoc',
"_rev":"4-b5bb2f1657ac5ac270c14b2335e51ef1ffccc0a7259e14bce46380d6c446eb89",
"_revs_info":[
{"rev":"4-b5bb2f1657ac5ac270c14b2335e51ef1ffccc0a7259e14bce46380d6c446eb89","status":"available"},
{"rev":"3-a9dac9ff5c8e1b2fce58e5397e9b6a8de729d5c6eff8f26a7b71df6348986123","status":"deleted"},
{"rev":fake_rev_1,"status":"deleted"},
{"rev":fake_rev_0,"status":"deleted"}
],
"_revisions":{
"start":4,
"ids":[
"b5bb2f1657ac5ac270c14b2335e51ef1ffccc0a7259e14bce46380d6c446eb89",
"a9dac9ff5c8e1b2fce58e5397e9b6a8de729d5c6eff8f26a7b71df6348986123",
fake_id_1,
fake_id_0
]}
},
function(err, response) {
// run tests
});
However, when I call the same function a second time:
o.jio.put({
"content":'a_deleted_version',
"_id":'myDoc',
"_rev":"3-05210795b6aa8cb5e1e7f021960d233cf963f1052b1a41777ca1a2aff8fd4b61",
"_revs_info":[ {"rev":"3-05210795b6aa8cb5e1e7f021960d233cf963f1052b1a41777ca1a2aff8fd4b61","status":"deleted"},{"rev":"2-67ac10df5b7e2582f2ea2344b01c68d461f44b98fef2c5cba5073cc3bdb5a844","status":"deleted"},{"rev":fake_rev_2,"status":"deleted"}],
"_revisions":{
"start":3,
"ids":[
"05210795b6aa8cb5e1e7f021960d233cf963f1052b1a41777ca1a2aff8fd4b61",
"67ac10df5b7e2582f2ea2344b01c68d461f44b98fef2c5cba5073cc3bdb5a844",
fake_id_2
]}
},
function(err, response) {
// run tests
});
My script fails because the _revs_info array does not include anything. All other parameters, and any extra parameters I add, are transferred. If I add a string or an object instead of an array, they also safely make it into my script alive.
Array however... does not pass...
Question:
I have been sitting on this for a few hours trying to nail it down, but I'm pretty clueless. So does anyone know of reasons why arrays might lose their content when passed as parameters in JavaScript?
Thanks!
EDIT:
I added a regular PUT after my first SYNC-PUT, which passed fine (without _revs_info being defined).
It's completely possible for a JavaScript function to mutate an array passed in. Consider this example:
function removeAll(a) { a.splice(0); }  // splice(0) removes every element of a, in place
var arr = [1, 2, 3];
removeAll(arr);                         // the function receives a reference, not a copy
console.log(arr);                       // [] - the caller's array is now empty
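So if the put implementation splices or shifts entries out of _revs_info while processing the first call, the caller's array is emptied for good. If something along those lines is happening, working on a defensive copy avoids it; a minimal sketch (only the copying is the point here):

function put(doc) {
  // Work on a shallow copy so the caller's _revs_info array survives.
  var revsInfo = (doc._revs_info || []).slice();
  // ... consume revsInfo (e.g. revsInfo.pop()) instead of mutating doc._revs_info ...
}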