I am building an API where some fields can be incremented.
After noticing data inconsistency in my MySQL database, I realized that the first version of my code was buggy:
Answer.incrementVotesCount = async (id) => {
  // get a copy of the data
  let answer = await getAnswer(id);
  // update the copy of the data locally
  answer.votesCount++;
  // replace the persisted data with the updated copy of the original data
  await Answer.updateAll({id}, answer);
};
Getting some data, updating it locally and persisting the modification can cause consistency problems when the route is used several times in a short period of time.
Such a situation would look something like this:
Caller A gets data. The persisted votesCount equals 14.
Caller B gets data. The persisted votesCount equals 14.
Caller A updates data. The persisted votesCount becomes 14 + 1.
At this point, the persisted votesCount equals 15, but Caller B's copy of it still equals 14.
Caller B updates data. The persisted votesCount becomes 14 + 1, whereas it should become 15 + 1.
Two increments have been performed, but the second one "crushed" the first, since it was computed from an obsolete value.
I thought about using LoopBack3's native SQL functionality, but it seems like it is not fully reliable so I am unsure whether it's a good idea to use it (even though a query as simple as SET a = a + 1 should probably work correctly).
I also thought about using MySQL triggers to perform some ACID-compliant incrementing, but I am unsure I can find a clean way to do this.
How do I increment some data without making it inconsistent?
I would move the votesCount field into a separate hasOne relation, and make the Answer model strict='filter' so that it rejects data that does not really belong to the model. Then, when a vote-up action is taken, I would increase the votesCount in that separate model, independently of the original Answer.
If you don't want to do it like this, you can check the original value in a before save hook: fetch the latest value from the database, compare it with the votesCount value in the model, and update it accordingly.
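For completeness, the raw-SQL idea mentioned in the question could look roughly like this. This is only a sketch: it assumes the model's MySQL data source exposes the low-level connector execute() method and that the backing table is named Answer, so adjust it to your setup.

Answer.incrementVotesCount = (id) =>
  new Promise((resolve, reject) => {
    // Let MySQL perform the increment atomically in a single statement,
    // instead of a read-modify-write round trip from the application.
    const sql = 'UPDATE Answer SET votesCount = votesCount + 1 WHERE id = ?';
    Answer.dataSource.connector.execute(sql, [id], (err, result) => {
      if (err) return reject(err);
      resolve(result);
    });
  });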
We are troubled by occasionally occurring "cursor not found" exceptions for some Morphia queries (asList), and I've found a hint on SO that this might be quite memory-consumptive.
Now I'd like to know a bit more about the background: can somebody explain (in English) what a cursor (in MongoDB) actually is? Why can it be kept open or not be found?
The documentation defines a cursor as:
A pointer to the result set of a query. Clients can iterate through a cursor to retrieve results. By default, cursors timeout after 10 minutes of inactivity
But this is not very telling. Maybe it would help to also define a batch of query results, because the documentation also states:
The MongoDB server returns the query results in batches. Batch size will not exceed the maximum BSON document size. For most queries, the first batch returns 101 documents or just enough documents to exceed 1 megabyte. Subsequent batch size is 4 megabytes. [...] For queries that include a sort operation without an index, the server must load all the documents in memory to perform the sort before returning any results.
Note: the queries in question don't use sort statements at all, nor limit or offset.
Here's a comparison between toArray() and cursors after a find() in the Node.js MongoDB driver. Common code:
var MongoClient = require('mongodb').MongoClient,
    assert = require('assert');

MongoClient.connect('mongodb://localhost:27017/crunchbase', function (err, db) {
    assert.equal(err, null);
    console.log('Successfully connected to MongoDB.');

    const query = { category_code: "biotech" };

    // toArray() vs. cursor code goes here
});
Here's the toArray() code that goes in the section above.
db.collection('companies').find(query).toArray(function (err, docs) {
    assert.equal(err, null);
    assert.notEqual(docs.length, 0);

    docs.forEach(doc => {
        console.log(`${doc.name} is a ${doc.category_code} company.`);
    });

    db.close();
});
Per the documentation,
The caller is responsible for making sure that there is enough memory to store the results.
Here's the cursor-based approach, using the cursor.forEach() method:
const cursor = db.collection('companies').find(query);

cursor.forEach(
    function (doc) {
        console.log(`${doc.name} is a ${doc.category_code} company.`);
    },
    function (err) {
        assert.equal(err, null);
        return db.close();
    }
);
With the forEach() approach, instead of fetching all the data into memory, we stream it to our application. find() returns a cursor immediately because it doesn't actually make a request to the database until we try to use some of the documents it will provide; the point of the cursor is to describe our query. The second parameter to cursor.forEach is a callback that runs when the iteration ends or an error occurs.
In the toArray() version of the code above, it was the toArray() call that forced the database request: it meant we needed ALL the documents and wanted them in an array.
Note that MongoDB returns data in batches: as the application iterates the cursor, the driver issues successive getMore requests to MongoDB for the next batch.
forEach() scales better than toArray() because we can process documents as they stream in, until we reach the end. Contrast that with toArray(), where we wait for ALL the documents to be retrieved and the entire array to be built; there we get no advantage from the fact that the driver and the database system are working together to batch results to your application. Batching is meant to provide efficiency in terms of memory overhead and execution time. Take advantage of it in your application, if you can.
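As an aside (not part of the original example): newer versions of the Node.js driver also expose the cursor as an async iterable, so the same streaming behaviour can be written with for await...of inside an async function. A minimal sketch, assuming a driver version that supports async iteration:

const cursor = db.collection('companies').find(query);
// Documents are pulled from the server batch by batch as we iterate.
for await (const doc of cursor) {
  console.log(`${doc.name} is a ${doc.category_code} company.`);
}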
I am by no means a MongoDB expert, but I just want to add some observations from working on a medium-sized Mongo system for the last year. Also thanks to @xameeramir for the excellent walkthrough of how cursors work in general.
The causes of a "cursor not found" exception may be several. One that I have noticed is explained in this answer.
The cursor lives server side. It is not distributed over a replica set but exists on the instance that is primary at the time of creation. This means that if another instance takes over as primary, the cursor will be lost to the client. If the old primary is still up and around, it may still be there, but of no use. I guess it is garbage-collected away after a while. So if your Mongo replica set is unstable, or you have a shaky network in front of it, you are out of luck when doing any long-running queries.
If the full content of what the cursor wants to return does not fit in memory on the server, the query may be very slow. RAM on your servers needs to be larger than the largest query you run.
All this can partly be avoided by better design. For a use case with large, long-running queries, you may be better off with several smaller database collections instead of one big one.
The collection's find method returns a cursor. The cursor points to the set of documents (called the result set) matched by the query filter. The result set consists of the actual documents returned by the query, but it lives on the database server.
In the client program, for example the mongo shell, what you get is a cursor. You can think of the cursor as an API or a program for working with the result set. The cursor has many methods which can be run to perform actions on the result set. Some of the methods affect the result set's data, and some provide status or information about the result set.
As the cursor maintains information about the result set, some of that information can change as you consume the result set data by applying other cursor methods. You use these methods and this information to suit your application, i.e., how and what you want to do with the queried data.
Here is how you work on the result set using the cursor and some of its commonly used methods and features, from the mongo shell:
The count() method returns the number of documents in the result set, as matched by the query. It is constant at any point in the life of the cursor, and it remains the same even after the cursor is closed or exhausted.
As you read documents from the result set, the result set gets exhausted. Once it is completely exhausted you cannot read any more. hasNext() tells you whether there are any documents still available to be read, returning a boolean true or false. next() returns a document if one is available (you first check with hasNext, and then do a next). These two methods are commonly used to iterate over the result set data. Another iteration method is forEach().
The data is retrieved from the server in batches, each of which has a default size. You read documents from the first batch and, when all of its documents have been read, the following next() call retrieves the next batch, and so on, until all documents are read from the result set. This batch size can be configured, and you can also inspect it (see the continuation of the shell example below).
If you apply the toArray() method to the cursor, all the remaining documents in the result set are loaded into the memory of your client computer and are available as a JavaScript array, and the result set is exhausted: a subsequent hasNext() will return false, and next() will throw an error. Because this method loads all the result set data into your client's memory, it can be memory-consuming in the case of large result sets.
The itcount() returns the count of remaining documents in the result set and exhausts the cursor.
There are cursor methods like isClosed(), isExhausted(), size() which give status information about the cursor and its underlying result set as you work with your data.
Those are the basic features of cursors and result sets. There are many more cursor methods; you can try them and see how they work to get a better understanding.
Reference:
mongo shell's cursor methods
Cursor behavior with the aggregate method (the collection's aggregate method also returns a cursor)
Example usage in mongo shell:
Assume the test collection has 200 documents (run the commands in the same sequence).
var cur = db.test.find( { } ).limit(25) creates a result set with 25 documents only.
But, cur.count() will show 200, which is the actual count of documents matched by the query's filter.
hasNext() will return true.
next() will return a document.
itcount() will return 24 (and exhausts the cursor).
itcount() again will return 0.
cur.count() will still show 200.
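Continuing the same shell example, the batch size mentioned earlier can also be set and inspected. A small sketch (the exact numbers assume the same 200-document test collection):

var cur2 = db.test.find( { } ).batchSize(50);   // fetch 50 documents per batch
cur2.next();                                    // pulls the first batch and returns one document
cur2.objsLeftInBatch();                         // 49 documents still buffered on the client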
This error also comes when you have a large data set and are doing batch processing on it, each batch takes more time, and in total that time exceeds the default cursor lifetime.
Then you need to change that default time to tell Mongo not to expire the cursor until processing is done.
Do check the No TimeOut documentation.
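For example, in the mongo shell (a sketch; the collection name and filter are made up):

// Ask the server not to apply the default 10-minute idle timeout to this cursor.
// Remember to exhaust or close it when you are done, or it will linger server side.
var cursor = db.orders.find({ status: "open" }).noCursorTimeout();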
A cursor is an object returned by calling db.collection.find(), which enables iterating through the documents (the NoSQL equivalent of SQL "rows") of a MongoDB collection (the NoSQL equivalent of a "table").
In case your cluster is stable and no members were down or changing state, the most probable reason for not finding the cursor is this:
The default idle cursor timeout is 10 minutes, but in versions >= 3.6 the cursor is also associated with a session, which has a default session timeout of 30 minutes. So even if you set the cursor not to expire with the noCursorTimeout() option, you are still limited by the 30-minute session timeout. To avoid your cursor being killed by the session timeout, you will need to periodically run the refreshSessions command from your code:
db.adminCommand({"refreshSessions" : [sessionId]})
to extend the session by another 30 minutes, so your cursor is not killed while you work with the data before fetching the next batch.
Check the docs here for details on how to do it:
https://docs.mongodb.com/manual/reference/method/cursor.noCursorTimeout/
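A rough sketch of that pattern in the mongo shell (the database and collection names are just examples, and it assumes the shell's session.getSessionId() helper):

// Run the query under an explicit session so we can refresh it ourselves.
var session = db.getMongo().startSession();
var sessionDb = session.getDatabase("mydb");
var cursor = sessionDb.orders.find({}).noCursorTimeout();

while (cursor.hasNext()) {
    var doc = cursor.next();
    // ... long-running processing of doc ...

    // Keep the session (and therefore the cursor) alive past the 30-minute default.
    // In practice you would do this only every few minutes, not per document.
    db.adminCommand({ refreshSessions: [ session.getSessionId() ] });
}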
In my post processor I have the following code:
if (i as Integer < protocolsArray.length) {
    protocolsArray[i as Integer] = "${requestProtocolId}";
}
and it worked: I put the "requestProtocolId" that I got via the JSON Extractor into position "i" of the array (position i == 0).
But the number of virtual users defined for this HTTP request is greater than one,
so it sends the request again and saves the new "requestProtocolId" in position zero again, overwriting the other protocol. I understand that with the new request it starts all over again, taking the initial values assigned to the variables again, but I've already tried incrementing i (i++) and returning the new array with the zero position filled in:
vars.putObject("protocolsArray", protocolsArray);
but it always returns the value set before the HTTP request. Is there a way to change that?
If I instead used an Iteration Controller set to "5" with a single user group, it would be as if the same user sent the request five times, right?
I wanted to simulate different users, but always keep the "requestProtocolId" value saved in the array positions because I'm going to use it in another request.
In its current state your question doesn't make a lot of sense.
JMeter Variables are local to the thread (virtual user)
If the user starts a new iteration, the variable will still be there; if this is not something you want/expect, use the vars.remove() function to ensure that the variable is always "fresh"
Don't inline JMeter Functions or Variables in scripts: in the case of Groovy (the recommended scripting option), only the first occurrence will be cached and used during subsequent iterations. See the JSR223 Sampler documentation for more information.
I know very well how the set method works, but I have a doubt about its use to update a node.
I want to know: when I save an object with a new field (the same values as before plus a new field), are all the fields of the object uploaded again, or is only the new field sent?
I see that in the database the unchanged fields are not lit green during the write, which makes me think that either
1) the whole object is sent to the database and, after the upload, the database simply ignores the fields without modifications, or
2) the unchanged fields are not even uploaded to the database (they simply stay on the client) and only the new fields are sent.
In the second case, with large objects there would be a considerable saving of bandwidth.
const object = {
  name: 'tower10',
  type: 'building',
  rooms: 10
};

await db.ref('object/1').set(object);

object.extra = 'extra content';
object.extra1 = 'extra content 1';

await db.ref('object/1').set(object);
The entire object is sent with every call to set(). Children whose values don't change don't count as updates for listeners (as you noticed in the console when their values don't flash). If you know only certain values are going to change, you could update with only those values rather than sending the entire thing. But the object you're showing here is rather small, and I don't think optimizing this small object will matter very much.
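If only the new children need to be written, an update() call can send just those keys instead of the whole object. A minimal sketch based on the code in the question:

// Writes only the listed children; name, type and rooms are left untouched on the server.
await db.ref('object/1').update({
  extra: 'extra content',
  extra1: 'extra content 1'
});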
I'm looking to update a list of orders (and statuses) real-time on a webpage. The orders in the (MySQL) database are updated asynchronously through other processes (PHP).
I'm familiar with the mechanics of pushing data to pages (polling, event-source). This is not about that.
What I'm struggling with is figuring out exactly what data to push for each user without needlessly updating list entries that don't need it, and without missing an update.
My table does have a DateTime column last_update_date that I update when there are any changes to the order. I know MySQL doesn't really have any event triggers that can trigger other code.
Ideas so far:
In my JS I could track the time of the last request and on every subsequent request, ask for data since that time. This doesn't work because JS time will most likely not match server MySQL time.
The same could probably be done by storing the server time in the user session. I feel like this would probably work most of the time, but depending on the timing of the DB update and the requests, changes could be missed, since the DB only stores a DateTime with a precision of 1 second.
I'm sure there's a more atomic way to do this, I am just drawing a blank though. What are suitable design patterns for this?
You are correct that you must poll your database for changes, and that MySQL can't push changes to other applications.
The trick is to use server time throughout for your polling. Use a table to keep track of polling. For example, suppose your users have user_id values. Then make a poll table consisting of
user_id INT primary key
polldate DATETIME
Then use the following sequence.
First, make sure your user has an entry in the poll table showing a long-ago polldate (INSERT IGNORE doesn't overwrite any existing row in the table):
SET @userid := <<your user's id>>;
INSERT IGNORE INTO poll (user_id, polldate) VALUES (@userid, '1970-01-01');
Then when you poll, do this sequence of operations.
Lock the poll row for the user:
START TRANSACTION;
SELECT polldate INTO @polldate
FROM poll
WHERE user_id = @userid
FOR UPDATE;
Retrieve the updated rows you need: those changed since the last poll.
SELECT t.whatever, t.whatelse
FROM transaction_table t
JOIN poll p ON t.user_id = p.user_id
WHERE p.user_id = @userid
AND t.last_update_date > p.polldate;
Update the poll table's polldate column
UPDATE poll p
SET p.polldate = IFNULL(
        (SELECT MAX(t.last_update_date)
         FROM transaction_table t
         WHERE t.user_id = p.user_id
           AND t.last_update_date > p.polldate),
        p.polldate)
WHERE p.user_id = @userid;
And commit the transaction.
COMMIT;
Every time you use this sequence you'll get the items from your transaction table that have been updated since the preceding poll. If there are no items, the polldate won't change. And, it's all in server time.
You need the transaction in case some other client updates a transaction table row between your SELECT and your UPDATE queries.
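If the polling endpoint happened to be implemented in Node.js rather than PHP, the same sequence could be wired up roughly like this. This is only a sketch: it assumes the mysql2/promise package and reuses the poll and transaction_table names from above.

const mysql = require('mysql2/promise');

// Assumes the INSERT IGNORE initialization above has already been done for this user.
async function pollUpdates(conn, userId) {
  await conn.beginTransaction();
  try {
    // Lock this user's poll row so concurrent polls serialize.
    const [[poll]] = await conn.query(
      'SELECT polldate FROM poll WHERE user_id = ? FOR UPDATE', [userId]);

    // Fetch everything changed since the last poll.
    const [updates] = await conn.query(
      'SELECT t.* FROM transaction_table t WHERE t.user_id = ? AND t.last_update_date > ?',
      [userId, poll.polldate]);

    // Advance the marker to the newest change we just read (if any).
    if (updates.length > 0) {
      const newest = updates
        .map(r => r.last_update_date)
        .reduce((a, b) => (a > b ? a : b));
      await conn.query(
        'UPDATE poll SET polldate = ? WHERE user_id = ?', [newest, userId]);
    }

    await conn.commit();
    return updates;
  } catch (err) {
    await conn.rollback();
    throw err;
  }
}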
The solution O.Jones provided would work for making update tracking atomic, but it fails if the following scenario occurs, all within one second:
An order update is written to the table (update 1)
A poll action occurs
An order update is written to the table (update 2)
In this scenario, the next poll action will either miss update 2 or duplicate update 1, depending on whether you use > or >= in your query. This is not the fault of the code; it's a limitation of the MySQL DATETIME type having only 1-second resolution. This can be somewhat mitigated with MySQL's fractional seconds support (e.g. DATETIME(6)), though even that would not guarantee atomicity.
The solution I ended up using was creating an order_changelog table:
CREATE TABLE `order_changelog` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `order_id` INT NULL,
  `update_date` DATETIME NULL,
  PRIMARY KEY (`id`)
);
This table is updated any time a change to an order is made, essentially enumerating every update.
For the client side, the server stores the last ID from order_changelog that was sent in the session. Every time the client polls, I get all rows from order_changelog that have an ID greater than the ID stored in the session and join the orders to it.
$last_id = $_SESSION['last_update_id'];

$sql = "SELECT o.*, MAX(c.id) AS update_id
        FROM order_changelog c
        LEFT JOIN orders o ON c.order_id = o.id
        WHERE c.id > $last_id
        GROUP BY o.id
        ORDER BY o.order_date";
I now am guaranteed to have all the orders since last poll, with no duplicates, and I don't have to track individual clients.
Below is a snippet of code that I am having trouble with. The purpose is to check for duplicate entries in the database and return "h" as a boolean, true or false. For testing purposes I am returning a true boolean for "h", but by the time the alert(duplicate_count); line gets executed, duplicate_count is still 0, even though the alert for "+1" does get executed.
To me it seems like the function updateUserFields takes longer to execute, so it hasn't finished by the time we get to the alert.
Any ideas or suggestions? Thanks!
var duplicate_count = 0;

for (var i = 0; i < skill_id.length; i++) {
    function updateUserFields(h) {
        if (h) {
            duplicate_count++;
            alert("count +1");
        } else {
            alert("none found");
        }
    }

    var g = new cfc_mentoring_find_mentor();
    g.setCallbackHandler(updateUserFields);
    g.is_relationship_duplicate(resource_id, mentee_id, section_id[i], skill_id[i], active_ind, table);
}

alert(duplicate_count);
There is no reason whatsoever to use client-side JavaScript/jQuery to remove duplicates from your database. Security concerns aside (and there are a lot of those), there is a much easier way to make sure the entries in your database are unique: use SQL.
SQL is capable of expressing the requirement that there be no duplicates in a table column, and the database engine will enforce that for you, never letting you insert a duplicate entry in the first place. The syntax varies very slightly by database engine, but whenever you create the table you can specify that a column must be unique.
Let's use SQLite as our example database engine. The relevant part of your problem is right now probably expressed with tables something like this:
CREATE TABLE Person(
    id INTEGER PRIMARY KEY ASC,
    -- Other fields here
);

CREATE TABLE MentorRelationship(
    id INTEGER PRIMARY KEY ASC,
    mentorID INTEGER,
    menteeID INTEGER,
    FOREIGN KEY (mentorID) REFERENCES Person(id),
    FOREIGN KEY (menteeID) REFERENCES Person(id)
);
However, you can enforce uniqueness, i.e. require that any (mentorID, menteeID) pair is unique, by making the pair (mentorID, menteeID) the primary key. This works because only one row is allowed per primary key value. Then, the MentorRelationship table becomes:
CREATE TABLE MentorRelationship(
    mentorID INTEGER,
    menteeID INTEGER,
    PRIMARY KEY (mentorID, menteeID),
    FOREIGN KEY (mentorID) REFERENCES Person(id),
    FOREIGN KEY (menteeID) REFERENCES Person(id)
);
EDIT: As per the comment, alerting the user to duplicates but not actually removing them
This is still much better with SQL than with JavaScript. When you do this in JavaScript, you read one database row at a time, send it over the network, wait for it to come to your page, process it, throw it away, and then request the next one. With SQL, all the hard work is done by the database engine, and you don't lose time by transferring unnecessary data over the network. Using the first set of table definitions above, you could write
SELECT mentorID, menteeID
FROM MentorRelationship
GROUP BY mentorID, menteeID
HAVING COUNT(*) > 1;
which will return all the (mentorID, menteeID) pairs that occur more than once.
Once you have a query like this working on the server (and are also pulling out all the information you want to show to the user, which is presumably more than just a pair of IDs), you need to send this over the network to the user's web browser. Essentially, on the server side you map a URL to return this information in some convenient form (JSON, XML, etc.), and on the client side you read this information by contacting that URL with an AJAX call (see jQuery's website for some code examples), and then display that information to the user. No need to write in JavaScript what a database engine will execute orders of magnitude faster.
EDIT 2: As per the second comment, checking whether an item is already in the database
Almost everything I said in the first edit applies, except for two changes: the schema and the query. The schema should become the second of the two schemas I posted, since you don't want the database engine to allow duplicates. Also, the query should be simply
SELECT COUNT(*) > 0
FROM MentorRelationship
WHERE mentorID = #mentorID AND menteeID = #menteeID;
where #mentorID and #menteeID are the items that the user selected, and are inserted into the query by a query builder library and not by string concatenation. Then, the server will get a true value if the item is already in the database, and a false value otherwise. The server can send that back to the client via AJAX as before, and the client (that's your JavaScript page) can alert the user if the item is already in the database.
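On the client side, the AJAX round trip can then be as small as this (a sketch only; the URL, response shape, and the mentorId / menteeId variables holding the user's selection are made up for illustration):

// Ask the server whether this (mentorID, menteeID) pair already exists,
// then alert the user based on the boolean it returns.
$.getJSON('/api/relationship-exists', { mentorID: mentorId, menteeID: menteeId })
  .done(function (response) {
    if (response.exists) {
      alert('This mentoring relationship already exists.');
    }
  });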