Node.js - RequestError: transaction was deadlocked

I'm having problems when I insert several records using promises; sometimes it works, but other times it gives me the deadlock error from the title.
My code is this:
return Promise.all([
    Promise.all(createBistamp),
    Promise.all(createSlstamp),
    listOfResults,
    i
]).then(function(listOfResults2) {
    for (var j = 0; j < resultArticle.length; j++) {
        if (arm === 'Arm-1') {
        }
        if (arm === 'Arm-1-11') {
        }
    }
    if (arm === 'Arm-1') {
        console.log("PROMISE ARM-1");
        return Promise.all([insertBi, insertBi2, insertSl]).then(function(insertEnd) {
            res.send("true");
        }).catch(function(err) {
            console.log(err);
        });
    }
    if (arm === 'Arm-1-11') {
        console.log("PROMISE ARM-1-11");
        return Promise.all([insertBi, insertBi2, insertSl, insertSlSaida]).then(function(insertEnd) {
            res.send("true");
        }).catch(function(err) {
            console.log(err);
        });
    }
}).catch(function(err) {
    console.log(err);
});
I have removed the code lines inside the ifs and the for loop; they were inserts into the database.
Example of insert:
var insertBi2 = request.query("INSERT INTO bi2 (bi2stamp,alvstamp1,identificacao1,szzstamp1,zona1,bostamp,ousrinis,ousrdata,ousrhora,usrinis,usrdata,usrhora) " +
    "VALUES ('" + bistamp + "','AB16083056009,454383576','2','Adm13010764745,450449475','1','" + bostamp + "','WWW','" + data + "','" + time + "','WWW','" + data + "','" + time + "')");
Full Code:
http://pastebin.com/DTjtXvDt
This is my structure, and I don't know if I'm working with promises correctly.
Thank you

I have also faced this problem recently.
error: RequestError: Transaction (Process ID 72) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Solution -
There was not a single index on the table. So, I created a non-clustered unique index on the unique identifier column.
I was surprised when this solution worked.
There was a single update operation in the code and no select operation, so it made me curious to do some research. I came across the lock granularity mechanism for locking resources. In my case, locking had to be at row level instead of page level.
Note:
For clustered tables, the data pages are stored at the leaf level of the (clustered) index structure and are therefore locked with index key locks instead of row locks.
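For reference, an index like that can be created with a single T-SQL statement, run once against the database. Below is a minimal sketch issued the same way the question runs its queries; the table name (bi2) and column name (bi2stamp) are only placeholders for the actual table and its unique identifier column.
request.query(
    "CREATE UNIQUE NONCLUSTERED INDEX IX_bi2_bi2stamp " +
    "ON bi2 (bi2stamp)"
).then(function () {
    console.log("index created");
}).catch(function (err) {
    console.log(err);
});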
Further Reading
https://www.sqlshack.com/locking-sql-server/

If you are inserting or updating data in a loop, it's better to build all the queries in the loop, store them, and then execute them all at once in a single transaction. You will save yourself a lot of issues.
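A minimal sketch of that idea, assuming the 'mssql' driver the question appears to use; the table and column names are abbreviated placeholders, and the rows run one after another inside a single transaction:
var sql = require('mssql');

function insertAllInOneTransaction(pool, rows) {
    var transaction = new sql.Transaction(pool);
    return transaction.begin().then(function () {
        // run the inserts sequentially on the same transaction
        return rows.reduce(function (chain, row) {
            return chain.then(function () {
                return new sql.Request(transaction)
                    .input('bistamp', sql.VarChar, row.bistamp)
                    .input('bostamp', sql.VarChar, row.bostamp)
                    .query("INSERT INTO bi2 (bi2stamp, bostamp) VALUES (@bistamp, @bostamp)");
            });
        }, Promise.resolve());
    }).then(function () {
        return transaction.commit();    // all rows become visible at once
    }).catch(function (err) {
        // nothing is left half-inserted if any insert fails
        return transaction.rollback().then(function () {
            throw err;
        });
    });
}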

What is the best way to handle two async calls that must both pass and make an "irreversible" change?

I am currently wondering about this issue: I have a team to which I want to add a user (i.e. write a new user to the database for that team), and I also want to increase the number of users the team needs to pay for (I use Stripe subscriptions).
async handleNewUser(user, teamId) {
    await addUserToTeamInDatabase(user, teamId)
    await incrementSubscriberQuantityInStripe(teamId)
}
The problem is: which one do I do first? I recently ran into an issue where users were being added but the subscriber count was not increasing. However, if I reverse them and increment first and then write to the database, and something goes wrong in that last part, the client pays more but does not get a new member added. One possible way of approaching this is with try/catch:
async handleNewUser(user, teamId) {
    let userAddedToDatabase = false
    let userAddedToStripe = false
    try {
        await addUserToTeamInDatabase(user, teamId)
        userAddedToDatabase = true
        await incrementSubscriberQuantityInStripe(teamId)
        userAddedToStripe = true
    } catch (error) {
        if (userAddedToDatabase && !userAddedToStripe) {
            await removeUserFromTeamInDatabase(user, teamId)
        }
    }
}
So I'm writing the new user to the database and then making a call to the Stripe API.
Is there a better way to approach this because it feels clumsy. Also, is there a pattern to address this problem or a name for it?
I'm using Firebase Realtime Database.
Thanks everyone!
What you want to perform is a transaction. In databases a transaction is a group of operations that is successful if all of its operations are successful. If at least one operation fails, no changes are made (all the other operations are cancelled or rolled back).
And Realtime Database supports transactions! Check the documentation
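For example, here is a minimal sketch of a Realtime Database transaction (Firebase web SDK syntax); the teams/{teamId}/memberCount path is only an assumed structure:
var countRef = firebase.database().ref('teams/' + teamId + '/memberCount');

countRef.transaction(function (current) {
    // this update function may be re-run if the value changes concurrently
    return (current || 0) + 1;
}).then(function (result) {
    if (!result.committed) {
        console.log('transaction was aborted');
    }
});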
If both operations were in the same database, you'd normally bundle them in a transaction and the DB would revert to the initial state if one of them failed. In your case the operations are in different external systems (the DB and Stripe), so you'll have to implement the transactional logic yourself.
You could simplify your example by checking where the error comes from in the catch clause. Then you can get rid of the flags. Something like this:
async handleNewUser(user, teamId) {
    try {
        await addUserToTeamInDatabase(user, teamId)
        await incrementSubscriberQuantityInStripe(teamId)
    } catch (error) {
        // If we fail to increment the subscriber count in Stripe,
        // cancel the transaction by removing the user from the DB
        if (error instanceof StripeError) {
            await removeUserFromTeamInDatabase(user, teamId)
        }
        // Re-throw error upstream
        throw error;
    }
}
I use instanceof here, but you can change the conditional logic to fit your program.

How to query a bigquery view from bigquery APIs

I have a view in BigQuery which contains fields from different datasets and tables. Now I would like to query this view from my Google Apps Script. What is the correct way of doing that?
Currently I have created a separate table in BigQuery and I am querying that table instead of the view, but I need the view because it will be updated whenever the underlying tables are updated.
If I use the table it works fine, but with the view I get the error below:
Exception: Response Code: 404. Message: Not Found.
These are the BigQuery API calls I use to return the result of the query:
try {
    var job = BigQuery.newJob();
    var config = BigQuery.newJobConfiguration();
    var queryConfig = BigQuery.newJobConfigurationQuery();
    queryConfig.setQuery(sql);
    queryConfig.setMaximumBillingTier(5);
    config.setQuery(queryConfig);
    job.setConfiguration(config);
    var jobid = BigQuery.Jobs.insert(job, projectNumber).jobReference;
    queryResults = BigQuery.Jobs.getQueryResults(projectNumber, jobid.jobId);
}
catch (err) {
    Logger.log(err);
    Browser.msgBox(err);
    return;
}

// Check on status of the Query Job : MONTHLY
while (queryResults.getJobComplete() == false) {
    try {
        queryResults = BigQuery.Jobs.getQueryResults(projectNumber, queryResults.jobId);
        //queryResults = BigQuery.Jobs.getQueryResults(projectNumber, job.id);
    }
    catch (err) {
        Logger.log(err);
        Browser.msgBox(err);
        return;
    }
}
return queryResults;
If I comment out my first try clause and use the one below,
try {
    var queryRequest = BigQuery.newQueryRequest();
    queryRequest.setQuery(sql).setTimeoutMs(100000);
    queryResults = BigQuery.Jobs.query(queryRequest, projectNumber);
    //Browser.msgBox(queryResults);
}
catch (err) {
    Logger.log(err);
    Browser.msgBox(err);
    return;
}
then it starts giving me:
Exception: Query exceeded resource limits for tier 1. Tier 3 or higher required.
It looks like the 'configuration.query.maximumBillingTier' property isn't getting set when you insert the job. The method of using 'JobConfigurationQuery' and the other classes seems to have been abandoned, as there is no mention of them in the current docs, and I needed to resort to the Wayback Machine to find them.
The most recent archived document, from 11/12/2013, only defines getters and setters for a few configuration properties, and 'maximumBillingTier' isn't one of them.
I'd suggest manually setting the request properties as is done in the usage examples from the current documentation, rather than relying on the "old" object constructors, as they seem to only be left around for compatibility purposes and are incomplete.
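As a rough sketch of what that looks like with plain request objects (the field names follow the BigQuery REST API; projectNumber and sql come from your code):
try {
    var jobSpec = {
        configuration: {
            query: {
                query: sql,
                maximumBillingTier: 5
            }
        }
    };
    var insertedJob = BigQuery.Jobs.insert(jobSpec, projectNumber);
    var queryResults = BigQuery.Jobs.getQueryResults(
        projectNumber, insertedJob.jobReference.jobId);
} catch (err) {
    Logger.log(err);
}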
The reason a view would require a higher billing tier than a table, by the way, is that a view is only a logical table, and the queries that define the view must be re-executed whenever the view itself is queried.

IndexedDB and large amount of inserts on Angular app

I'm struggling with responses of 20-50k JSON objects from the server which I should insert into our IndexedDB datastore.
The response is iterated with a forEach and every single row is added individually. Calls with responses of less than 10k rows work fine and are inserted within a minute or so. But when the amounts get larger, the database becomes unresponsive after a while and returns this error message:
"db Error err=transaction aborted for unknown reason"
I'm using the Dexie wrapper for the database and an Angular wrapper for Dexie called ngDexie.
var deferred = $q.defer();
var progress = 0;
// make the call
$http({
    method: 'GET',
    headers: headers,
    url: '/Program.API/api/items/getitems/' + user
}).success(function (response) {
    // parse response
    var items = angular.fromJson(response);
    // loop each item
    angular.forEach(items, function (item) {
        // insert into db
        ngDexie.put('stuff', item).then(function () {
            progress++;
            $ionicLoading.show({
                content: 'Loading',
                animation: 'fade-in',
                template: 'Inserting items to db: ' + progress
                    + '/' + items.length,
                showBackdrop: true,
                maxWidth: 200,
                showDelay: 0
            });
            if (progress == items.length) {
                setTimeout(function () {
                    $ionicLoading.hide();
                }, 500);
                deferred.resolve(items);
            }
        });
    });
}).error(function (error) {
    $log.error('something went wrong');
    $ionicLoading.hide();
});
return deferred.promise;
Do I have the wrong approach in dealing with the whole data set in one chunk? Could there be better alternatives? This whole procedure is only done once, when the user opens the site. All help is greatly appreciated. The target devices are tablets running Android with Chrome.
Since you are getting an unknown error, something is going wrong with I/O. My guess is that the DB underneath has trouble handling the amount of data. You may try to split it up into batches with a maximum of 10k each.
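A rough sketch of that batching idea, using only the ngDexie.put call already in your code and $q; the 10k chunk size and the 'stuff' store name are taken from the question:
function insertInBatches(items, batchSize) {
    var index = 0;
    function nextBatch() {
        if (index >= items.length) {
            return $q.when();
        }
        var batch = items.slice(index, index + batchSize);
        index += batchSize;
        // wait for the whole chunk before starting the next one
        return $q.all(batch.map(function (item) {
            return ngDexie.put('stuff', item);
        })).then(nextBatch);
    }
    return nextBatch();
}

insertInBatches(items, 10000);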
A transaction can fail for reasons not tied to a particular IDBRequest. For example, due to IO errors when committing the transaction, or due to running into a quota limit where the implementation can't tie exceeding the quota to a particular request. In this case the implementation MUST run the steps for aborting a transaction using the transaction as transaction and the appropriate error type as error. For example, if the quota was exceeded then QuotaExceededError should be used as error, and if an IO error happened, UnknownError should be used as error.
You can find this in the specs.
Another possibility: do you have any indexes defined on the object store? For every index you have, that index needs to be maintained with every insert.
If you insert many new records, I would suggest using add. This was added for performance reasons. See the documentation here:
https://github.com/FlussoBV/NgDexie/wiki/ngDexie.add
I had problems with massive bulk inserts (100,000-200,000 records). I've solved all my IndexedDB performance problems using bulkPut() from the Dexie library. It has this important feature:
Dexie has a kick-ass performance. Its bulk methods take advantage of a not-well-known feature in IndexedDB that makes it possible to store stuff without listening to every onsuccess event. This speeds up the performance to a maximum.
Dexie: https://github.com/dfahlander/Dexie.js
BulkPut() -> http://dexie.org/docs/Table/Table.bulkPut()
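A minimal sketch of what that looks like when talking to Dexie directly; the database name, the 'stuff' table and its 'id' primary key are assumptions based on the question:
var db = new Dexie('AppDatabase');
db.version(1).stores({
    stuff: 'id'   // primary key only; every extra index slows down inserts
});

db.stuff.bulkPut(items).then(function () {
    console.log('inserted ' + items.length + ' items in one bulk operation');
}).catch(Dexie.BulkError, function (err) {
    // BulkError reports individual failures without aborting the whole batch
    console.error(err.failures.length + ' items failed');
});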

atomic 'read-modify-write' in javascript

I'm developing an online store app, and using Parse as the back-end. The count of each item in my store is limited. Here is a high-level description of what my processOrder function does:
1. Find the items users want to buy from the database.
2. Check whether the remaining count of each item is enough.
3. If step 2 succeeds, update the remaining count.
4. Check if the remaining count becomes negative; if it does, revert the remaining count to the old value.
Ideally, the above steps should be executed exclusively. I learned that JavaScript is single-threaded and event-based, so here are my questions:
no way in Javascript to put the above steps in a critical section, right?
assume only 3 items are left, and two users each try to order 2 of them. The remaining count will end up as -1 for one of the users, so it needs to be reverted to 1 in that case. Now imagine another user tries to order 1 item while the remaining count is -1: he will fail although he should be allowed to order. How do I solve this problem?
Following is my code:
Parse.Cloud.define("processOrder", function(request, response) {
    Parse.Cloud.useMasterKey();
    var orderDetails = {'apple': 2, 'pear': 3};
    var query = new Parse.Query("Product");
    query.containedIn("name", ['apple', 'pear']);
    query.find().then(function(results) {
        // check if any dish is out of stock or not
        _.each(results, function(item) {
            var remaining = item.get("remaining");
            var required = orderDetails[item.get("name")];
            if (remaining < required)
                return Parse.Promise.error(item.get("name") + " is out of stock");
        });
        return results;
    }).then(function(results) {
        // make sure the remaining count does not become negative
        var promises = [];
        _.each(results, function(item) {
            item.increment("remaining", -orderDetails[item.get("name")]);
            var single_promise = item.save().then(function(savedItem) {
                if (savedItem.get("remaining") < 0) {
                    savedItem.increment("remaining", orderDetails[savedItem.get("name")]);
                    return savedItem.save().then(function(revertedItem) {
                        return Parse.Promise.error(savedItem.get("name") + " is out of stock");
                    }, function(error) {
                        return Parse.Promise.error("Failed to revert order");
                    });
                }
            }, function(error) {
                return Parse.Promise.error("Failed to update database");
            });
            promises.push(single_promise);
        });
        return Parse.Promise.when(promises);
    }).then(function() {
        // order placed successfully
        response.success();
    }, function(error) {
        response.error(error);
    });
});
no way in Javascript to put the above steps in a critical section, right?
See, here is the amazing part: in JavaScript, everything runs in a critical section. There is no preemption and multiprocessing is cooperative. If your code has started running, there is simply no way any other code can run before yours completes.
That is, unless your code is done executing.
The problem is that you're doing IO, and IO in JavaScript yields back to the event loop before it actually happens, unlike blocking code. So when you create and run a query you don't actually continue running right away (that's what your callback/promise code is about).
Ideally, the above steps should be executed exclusively.
Sadly, that's not a JavaScript problem; it's a host environment problem, in this case Parse. This is because you have to explicitly yield control to the other code when you use their APIs (through callbacks and promises), and it is up to them to solve it.
Lucky for you, Parse has atomic counters. From the API docs:
To help with storing counter-type data, Parse provides methods that atomically increment (or decrement) any number field. So, the same update can be rewritten as:
gameScore.increment("score");
gameScore.save();
There are also atomic array operations which you can use here. Since you can do step 3 atomically, you can guarantee that the counter represents the actual inventory.
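A small sketch of steps 3 and 4 of the question built on that atomic counter; item and required are the variables from the question's code:
item.increment("remaining", -required);
item.save().then(function (savedItem) {
    if (savedItem.get("remaining") < 0) {
        // another order won the race; put the stock back atomically
        savedItem.increment("remaining", required);
        return savedItem.save().then(function () {
            return Parse.Promise.error(savedItem.get("name") + " is out of stock");
        });
    }
    return savedItem;
});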

Insert an array of documents into a model

Here's the relevant code:
var Results = mongoose.model('Results', resultsSchema);
var results_array = [];
_.each(matches, function(match) {
    var results = new Results({
        id: match.match_id,
        ... // more attributes
    });
    results_array.push(results);
});
callback(results_array);
});
}
], function(results_array) {
    results_array.insert(function(err) {
        // error handling
Naturally, I get a 'no method found' error for results_array. However, I'm not sure what else to call the method on.
In other functions I'm passing through the equivalent of the results variable here, which is a mongoose object and has the insert method available.
How can I insert an array of documents here?
** Edit **
function(results_array) {
    async.eachLimit(results_array, 20, function(result, callback) {
        result.save(function(err) {
            callback(err);
        });
    }, function(err) {
        if (err) {
            if (err.code == 11000) {
                return res.status(409);
            }
            return next(err);
        }
        res.status(200).end();
    });
});
So what's happening:
When I clear the collection, this works fine.
However, when I resend this request I never get a response.
This is happening because I have set my schema to not allow duplicates coming in from the JSON response. So when I resend the request, it gets the same data as the first request, and thus responds with an error. This is what I believe status code 409 deals with.
Is there a typo somewhere in my implementation?
Edit 2
Error code coming out:
{ [MongoError: insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.results.$_id_ dup key: { : 1931559 }]
  name: 'MongoError',
  code: 11000,
  err: 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: test.results.$_id_ dup key: { : 1931559 }' }
So this is as expected.
Mongo is responding with a 11000 error, complaining that this is a duplicate key.
Edit 3
if (err.code == 11000) {
return res.status(409).end();
}
This seems to have fixed the problem. Is this a band-aid fix though?
You seem to be trying to insert various documents at once here. So you actually have a few options.
Firstly, there is no .insert() method in mongoose, as it is replaced by other wrappers such as .save() and .create(). The most basic process here is to just call "save" on each document you have created, also employing the async library to implement some flow control so everything doesn't just queue up:
async.eachLimit(results_array, 20, function(result, callback) {
    result.save(function(err) {
        callback(err)
    });
}, function(err) {
    // process when complete or on error
});
Another thing here is that .create() can just take a list of objects as its arguments and simply inserts each one as the document is created:
Results.create(results_array,function(err) {
});
That would actually be with "raw" objects though, as they are essentially all cast as a mongoose document first. You can ask for the documents back as additional arguments in the callback signature, but constructing that is likely overkill.
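A small, hedged sketch of that form; depending on the mongoose version, the created documents come back either spread as extra callback arguments or as a single array, which is part of why constructing this is likely overkill:
Results.create(results_array, function (err /*, createdDocs... */) {
    if (err) {
        // e.g. err.code == 11000 for the duplicate key case discussed above
        return next(err);
    }
    res.status(200).end();
});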
However those shake out, the "async" form will process the documents in parallel and the "create" form will run in sequence, but both effectively issue one "insert" to the database for each document that is created.
For true Bulk functionality you presently need to address the underlying driver methods, and the best place is with the Bulk Operations API:
mongoose.connection.on("open", function(err, conn) {

    var bulk = Results.collection.initializeUnorderedBulkOp();
    var count = 0;

    async.eachSeries(results_array, function(result, callback) {
        bulk.insert(result);
        count++;

        if (count % 1000 == 0) {
            bulk.execute(function(err, response) {
                // maybe check response
                bulk = Results.collection.initializeUnorderedBulkOp();
                callback(err);
            });
        } else {
            callback();
        }
    }, function(err) {
        // called when done
        // Check if there are still writes queued
        if (count % 1000 != 0)
            bulk.execute(function(err, response) {
                // maybe check response
            });
    });

});
Again the array here is raw objects rather than those cast as a mongoose document. There is no validation or other mongoose schema logic implemented here as this is just a basic driver method and does not know about such things.
While the array is processed in series, the above shows that a write operation will only actually be sent to the server once every 1000 entries processed or when the end is reached. So this truly does send everything to the server at once.
Unordered operations means that the err would normally not be set but rather the "response" document would contain any errors that might have occurred. If you want this to fail on the first error then it would be .initializeOrderedBulkOp() instead.
The care to take here is that you must be sure a connection is open before accessing these methods in this way. Mongoose looks after the connection with its own methods, so where a method such as .save() is reached in your code before the actual connection is made to the database, it is "queued" in a sense, awaiting this event.
So either make sure that some other "mongoose" operation has completed first, or otherwise ensure that your application logic works within such a case where the connection is sure to be made. This is simulated in the example above by placing the code within the "connection open" event.
It depends on what you really want to do. Each case has its uses, with the last of course being the fastest possible way to do this, as there are a limited number of "write" and "return result" conversations going back and forth with the server.
