Firebase: avoid same node being overridden/updated by multiple clients (web) - javascript

I am using Firebase to build a checkout system where multiple users of the same store can override/update a node at the same time.
Scenario/Steps:
User1: Enters an item code in the system (and can enter multiple)
User2: Enters an item code in the system (same as above)
User3: Releases that item code from the system
User4: ... and so on.
Code:
"-KhBgsi8HwT5BloV0Srt" : {
"lastUpdated" : 1504285854767,
"methodName" : "ITEM_ENTERED",
"payLoad": "{'Id':1, 'ItemName': 'Apples'}"
}
In the above snippet, whenever any user enters an item I override the node with the item's methodName and payLoad. And whenever a user releases an item from the system, I again update the same node by overriding it like this:
"-KhBgsi8HwT5BloV0Srt" : {
"lastUpdated" : 1504285854767,
"methodName" : "ITEM_RELEASED",
"payLoad": "{'Id':1, 'ItemName': 'Apples'}"
}
All users are connected to the same Firebase node, which can be overridden by any of them at the same time. So if all users perform an operation at the same time, the node keeps only the last write. How can I avoid that and make sure all users see every update, not just the last one that happened to land on the node?
When everything is happening quickly, the node gets mixed up between ITEM_ENTERED/ITEM_RELEASED methods and clients get out of sync.
I hope I've made my point clear. I just need a pointer in the right direction to fix these concurrent writes to the same node.
Any help is appreciated.
Thanks

Comments are getting long, but this seems to be a valid solution to your problem:
Just push a new node for each event, and make your listener return the last pushed node.
Rather than using child_changed, use child_added.
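A minimal sketch of that suggestion, assuming an events list at a "storeEvents" path (the path name is illustrative, not from the question):
var eventsRef = firebase.database().ref("storeEvents");
// Writer: push a new event node instead of overwriting the existing one.
eventsRef.push({
  lastUpdated: firebase.database.ServerValue.TIMESTAMP,
  methodName: "ITEM_ENTERED",
  payLoad: JSON.stringify({ Id: 1, ItemName: "Apples" })
});
// Reader: child_added fires once per pushed event, in order, on every client.
// limitToLast(1) keeps clients synced to the most recent event only.
eventsRef.limitToLast(1).on("child_added", function (snapshot) {
  console.log(snapshot.key, snapshot.val());
});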

Related

matrix-js-sdk setup and configuration

I am having some issues trying to connect to a matrix server using the matrix-js-sdk in a react app.
I have provided a simple code example below, and made sure that the credentials are valid (login works) and that the environment variable containing the URL for the matrix client is set. I have signed into Element in a browser and created two rooms for testing purposes, and was expecting these two rooms to be returned from matrixClient.getRooms(). However, this simply returns an empty array. With some further testing, it seems that only the asynchronous functions for fetching room, member, and group IDs work as expected.
According to https://matrix.org/docs/guides/usage-of-the-matrix-js-sd these should be valid steps for setting up the matrix-js-sdk; however, the sync is never executed either.
const matrixClient = sdk.createClient(
  process.env.REACT_APP_MATRIX_CLIENT_URL!
);
await matrixClient.login("m.login.password", credentials);
matrixClient.once('sync', () => {
  debugger; // Never hit
});
for (const room of matrixClient.getRooms()) {
  debugger; // Never hit
}
I did manage to use the roomIds returned from await matrixClient.roomInitialSync(roomId, limit, callback), but this led me to another issue where I can't figure out how to decrypt messages, as the events containing the messages sent in the room seem to be of type 'm.room.encrypted' instead of 'm.room.message'.
Does anyone have any good examples of working implementations of the matrix-js-sdk, or any other good resources for properly understanding how to put this all together? I need to be able to load rooms, persons, messages, etc. and display them in a ReactJS application.
It turns out I simply forgot to run startClient on the matrix client, resulting in it not fetching any data.
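A sketch of the corrected setup based on that fix (checking for the 'PREPARED' state is the usual way to wait for the first full sync):
await matrixClient.login("m.login.password", credentials);
await matrixClient.startClient(); // the missing call: starts syncing
matrixClient.once('sync', (state) => {
  if (state === 'PREPARED') {
    console.log(matrixClient.getRooms()); // now populated
  }
});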

Firebase concurrency issue: how to prevent 2 users from getting the same Game Key?

DATABASE: (screenshot of the database structure omitted; it contains the NTWKeysLeft and NTWUsedKeys lists referenced below)
SITUATION:
My website sells keys for a game.
A key is a randomly generated string of 20 characters whose uniqueness is guaranteed (not created by me).
When someone buys a key, NTWKeysLeft is read to find its first element. That element is then copied, deleted from NTWKeysLeft, and pasted into NTWUsedKeys.
Said key is then displayed on the buyer's screen.
PROBLEM:
How can I prevent the following problem:
1) 2 users buy the game at the exact same time.
2) They both get the same key read from NTWKeysLeft (first element in list)
3) And thus both get the same key
I know about Firebase Transactions already. I am looking for a pseudo-code/code answer that will point me in the right direction.
CURRENT CODE:
Would something like this work? Can I put a transaction inside another transaction?
var keyRef = admin.database().ref("NTWKeysLeft");
keyRef.limitToFirst(1).transaction(function (keySnapshot) {
  keySnapshot.forEach(function (childKeySnapshot) {
    // Key is read here:
    var key = childKeySnapshot.val();
    // How can I prevent two concurrent read requests from reading the same key?
    // Using a transaction to change a boolean could only happen after the read,
    // since I first need to read in order to know which key's boolean to change.
    var selectedKeyRef = admin.database().ref("NTWKeysLeft/" + key);
    var usedKeyRef = admin.database().ref("NTWUsedKeys/" + key);
    var keysLeftRef = admin.database().ref("keysLeft");
    selectedKeyRef.remove();
    usedKeyRef.set(true);
    keysLeftRef.transaction(function (keysLeft) {
      if (!keysLeft) {
        keysLeft = 0;
      }
      keysLeft = keysLeft - 1;
      return keysLeft;
    });
    res.render("bought", { key: key });
  });
});
Just to be clear: keyRef.limitToFirst(1).transaction(function (keySnapshot) { does not work, but I would like to accomplish something to that effect.
A lot depends on how you generate the keys, since that determines how likely collisions are. I recommend reading about Firebase's push IDs to get an idea of how unique those are, and compare that to your keys. If you can't statistically guarantee uniqueness of your keys, or if statistical uniqueness isn't good enough, you'll have to use transactions to prevent conflicting updates.
The OP has changed the question a bit, so I will update the answer as follows: I will leave the bottom part about transactions as it was and put the new update on top.
I can see two ways to proceed:
1) Handle the lock system on your own, using JavaScript callbacks or other mechanisms to prevent simultaneous access to a portion of the code.
or
2) Use Firebase transactions. In this case, I don't have the setup ready to share code other than the sample/pseudo code provided at the bottom of this answer.
With respect to option 1 above:
I have coded a use case and put it on Plunker. It uses JavaScript callbacks to queue users as they try to access the part of the code under lock:
I. A user comes in and is placed in the queue.
II. The callback function then pops users off on a first-come, first-out basis. The keys are kept at the top of the page, shared by the functions.
I wired this to a button click event; when you click the button twice quickly, you will see keys assigned, and they are different keys. A simplified sketch of the same queuing idea follows the Plunker link below.
To read this code, click on the script.js file on the left and read starting from the bottom of the page, where the functions are called.
Here is the sample code on Plunker. After opening it, click Run at the top of the page and then click the button on the right-hand side. An alert will pop up showing which key is given (note: there are two calls back to back, to simulate two users coming in at the same time).
https://plnkr.co/edit/GVFfvqQrlLeMaKlo5FCj?p=info
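A simplified sketch of that queuing idea, using a promise chain as an in-process mutex rather than raw callbacks (this only serializes access within a single Node/browser instance; names are illustrative):
let lock = Promise.resolve();
function withLock(task) {
  const result = lock.then(task);
  lock = result.catch(() => {}); // keep the chain alive even if a task fails
  return result;
}
// Two "simultaneous" buyers still get different keys:
const keys = ["KEY-AAA", "KEY-BBB"];
withLock(() => keys.shift()).then((k) => console.log("user 1 got", k));
withLock(() => keys.shift()).then((k) => console.log("user 2 got", k));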
Firebase transactions:
Use Firebase transactions to prevent concurrent read/write issues. Below is the transaction() method signature:
transaction(transactionUpdate, onComplete, applyLocally) returns a firebase.Promise containing { committed: boolean, snapshot: nullable firebase.database.DataSnapshot }
Note: transaction() needs the write operation as its first parameter, and in your case it looks like you're removing a key upon success, hence the update function below, which returns null in place of a plain write.
Try this pseudo code:
// First, get a reference to your db
var selectedKeyRef = admin.database().ref("NTWKeysLeft/" + key);
// Needed by transaction() as its first parameter: the update function.
// Returning null removes the node once the transaction commits;
// returning undefined aborts the transaction.
function writeOperation(currentValue) {
  return null; // remove the key
}
selectedKeyRef.transaction(writeOperation, function (error, committed, snapshot) {
  if (error) {
    console.log('Transaction failed abnormally!', error);
  } else if (!committed) {
    console.log('We aborted the transaction (because xyz).');
  } else {
    console.log('Key removed!');
  }
  console.log("showKey: ", snapshot.val());
}); // end of the transaction() method call
Docs: to see the parameters/return objects of the transaction() method, see:
https://firebase.google.com/docs/reference/js/firebase.database.Reference#transaction
From the docs: "If another client writes to the location before your new value is successfully written, your update function is called again with the new current value, and the write is retried."
https://firebase.google.com/docs/database/web/read-and-write#save_data_as_transactions
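Building on those docs, a hedged sketch of one way to claim a key atomically: run the transaction on the whole NTWKeysLeft node, pick the first key inside the update function, and remove it in the same atomic step (node names follow the question; error handling is omitted):
var keysLeftRef = admin.database().ref("NTWKeysLeft");
var claimedKey = null;
keysLeftRef.transaction(function (keys) {
  if (!keys) return keys; // nothing to claim yet; Firebase may retry with server data
  claimedKey = Object.keys(keys)[0];
  delete keys[claimedKey]; // claim and remove in one atomic step
  return keys;
}, function (error, committed) {
  if (!error && committed && claimedKey) {
    admin.database().ref("NTWUsedKeys/" + claimedKey).set(true);
    res.render("bought", { key: claimedKey });
  }
});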
I don't think the problem you're worried about can happen. JavaScript, including Node, is single-threaded and can only do one thing at a time. If you had a big server infrastructure with more than one server running this code, then it would be possible, but for a single Node program, there's no problem.
Since none of the previous answers discussing the scope of transactions worked out, I would suggest a different workaround.
Is it possible to trigger the unique code generation when someone buys a code? If yes, you could generate the unique string when the "buy" button is clicked, display the ID, and save the ID to your database.
Later the user enters the key in your game, which checks whether the ID is written in your database. This would probably also save a bit of data, since you do not need to keep track of unique IDs before they get bought, and you will never run out of IDs, since they are generated only when necessary.
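A rough sketch of that idea, using a Firebase push ID (statistically unique) as the game key; the "soldKeys" path and user handling are assumptions for illustration:
// Generate the key only when "buy" is clicked.
function buyKey(userId) {
  var soldRef = admin.database().ref("soldKeys").push();
  return soldRef.set({ owner: userId, createdAt: Date.now() })
    .then(function () {
      return soldRef.key; // display this ID to the buyer
    });
}
// Later, the game validates a key by checking that it exists in the database:
function validateKey(key) {
  return admin.database().ref("soldKeys/" + key).once("value")
    .then(function (snap) { return snap.exists(); });
}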

PouchDB delete document upon successful replication to CouchDB

I am trying to implement a process as described below:
1) Create a sale_transaction document on the device.
2) Put the sale_transaction document in Pouch.
3) Since there's a live replication between Pouch & Couch, let the sale_transaction document flow to Couch.
4) Upon successful replication of the sale_transaction document to Couch, delete the document in Pouch.
5) Don't let the deleted sale_transaction document in Pouch flow through to Couch.
Currently, I have implemented a two-way sync between the two databases, where I'm filtering each document that is coming from Couch to Pouch, and vice versa.
For the replication from Couch to Pouch, I didn't want to let sale_transaction documents go through, since I can just get these documents from Couch.
PouchDB.replicate(remoteDb, localDb, {
  // Replicate from Couch to Pouch
  live: true,
  retry: true,
  filter: (doc) => {
    return doc.doc_type !== "sale_transaction";
  }
});
For the replication from Pouch to Couch, I put in a filter to not let deleted sale_transaction documents go through.
PouchDB.replicate(localDb, remoteDb, {
  // Replicate from Pouch to Couch
  live: true,
  retry: true,
  filter: (doc) => {
    if (doc.doc_type === "sale_transaction" && doc._deleted) {
      // These are deleted transactions which I don't want to replicate to Couch
      return false;
    }
    return true;
  }
}).on("change", (change) => {
  // Handle change
  replicateOutChangeHandler(change);
});
I also implemented a change handler to delete the sale_transaction documents in Pouch after they are written to Couch.
// Note: upsert() comes from the pouchdb-upsert plugin.
function replicateOutChangeHandler(change) {
  for (let doc of change.docs) {
    if (doc.doc_type === "sale_transaction" && !doc._deleted) {
      localDb.upsert(doc._id, function (prevDoc) {
        if (!prevDoc._deleted) {
          prevDoc._deleted = true;
        }
        return prevDoc;
      }).then((res) => {
        console.log("Deleted Document After Replication", res);
      }).catch((err) => {
        console.error("Deleted Document After Replication (ERROR): ", err);
      });
    }
  }
}
The flow of the data seems to work at first, but when I get the sale_transaction document from Couch and then do some editing, I have to repeat the process of writing the document in Pouch, letting it flow to Couch, then deleting it in Pouch. After some editing of the same document, however, the document in Couch also ends up deleted.
I am fairly new to Pouch & Couch, and to NoSQL in general, and was wondering if I'm doing something wrong in the process.
For a situation like the one you've described above, I'd suggest tweaking your approach as follows:
Create a PouchDB database as a replication target from CouchDB, but treat this database as a read-only mirror of the CouchDB database, applying whatever transforms you need in order to strip certain document types from the local store. For the sake of this example, let's call this database mirror. The mirror database only gets updated one-way, from the canonical CouchDB database via transform replication.
Create a separate PouchDB database to store all your sales transactions. For the sake of this example, let's call this database user-data.
When the user creates a new sale transaction, this document is written to user-data. Listen for changes on user-data, and when a document is created, use the change handler to create and write the document directly to CouchDB.
At this point, CouchDB is receiving sales transactions from user-data, but your transform replication is preventing them from polluting mirror. You could leave it at that, in which case user-data will have local copies of all sales transactions. On logout, you can just delete the user-data database. Alternatively, you could add some more complex logic in the change handler to delete the document once CouchDB has received it.
If you really wanted to get fancy, you could do something even more elaborate. Leave the sales transactions in user-data after they are written to CouchDB, and in your transform replication from CouchDB to mirror, look for these newly-created sales transaction documents. Instead of removing them, just strip them of everything but their _id and _rev fields, and use these as 'receipts'. When one of these IDs matches an ID in user-data, that document can be safely deleted.
Whichever method you choose, I suggest you think about your local PouchDB's _changes feed as a worker queue, instead of putting all of this elaborate logic in replication filters. The methods above should all survive offline cases without introducing conflicts, and recover nicely when connectivity is restored. I'd recommend the last solution, though it might be a bit more work than the others. Hope this helps.
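A loose sketch of that "changes feed as a worker queue" idea, assuming a local user-data PouchDB and a remote CouchDB connection (names and URL are illustrative):
const PouchDB = require('pouchdb');
const userData = new PouchDB('user-data');
const remoteDb = new PouchDB('https://couch.example.com/sales'); // assumed URL
userData.changes({ live: true, since: 'now', include_docs: true })
  .on('change', async (change) => {
    if (change.deleted || change.doc.doc_type !== 'sale_transaction') return;
    try {
      // Write directly to CouchDB, then remove the local copy.
      const { _rev, ...doc } = change.doc; // strip the local rev for the remote write
      await remoteDb.put(doc);
      await userData.remove(change.doc);
    } catch (err) {
      console.error('Failed to push sale transaction', err);
    }
  });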
Maybe use an additional field for deletion, thus marking the record as to-be-deleted.
Then have a periodic routine running on both Pouch and Couch that scans for records marked for deletion and deletes them.
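A minimal sketch of that mark-and-sweep idea, assuming a pendingDelete flag (the flag name and interval are illustrative):
async function sweepMarkedDocs(db) {
  const result = await db.allDocs({ include_docs: true });
  const marked = result.rows
    .filter((row) => row.doc.pendingDelete)
    .map((row) => ({ ...row.doc, _deleted: true }));
  if (marked.length) await db.bulkDocs(marked); // tombstone all marked docs
}
// Run against the local Pouch database every minute; a similar job would run server-side.
setInterval(() => sweepMarkedDocs(localDb).catch(console.error), 60 * 1000);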

Listening for changes on Firebase

I have the following structure on my firebase database:
{
  "gateways_pr": {
    "gateway_1": {
      "avisos": {
        "00": {
          "aviso_1": "0",
          "aviso_2": "0"
        },
        "01": {
          "aviso_1": "0",
          "aviso_2": "0"
        }
      }
    }
  }
}
I have a small demo JavaScript web page that is listening for child_changed on gateways_pr/gateway_1/avisos:
var gateWayRef = firebase.database().ref("gateways_pr/gateway_1/avisos");
gateWayRef.on('child_changed', function (data) {
  console.log("CHILD_CHANGE");
  console.log(data.val());
  var datos = data.val();
  console.log(datos);
});
When I change, for example, gateway_1/avisos/00/aviso_1 and set it to 2, I can track the change with the Chrome developer tools by looking into the frames of the websocket, and I receive:
{"t":"d","d":{"b":{"p":"gateways_pr/gateway_1/avisos/00/aviso_1","d":"2"},"a":"d"}}
So I'm only receiving the change that was made.
The problem is that, in my code, data.val() has the following value:
{aviso_1: "2", aviso_2: "0"}
Calling data.ref.path.toString() returns:
/gateways_pr/gateway_1/avisos/00
That means that the Firebase API shows you everything below the child whose property changed (00 in this case).
Is there any way of knowing what the change was (in this case it should return "aviso_1")?
The only workaround I've found so far is making my code listen on every child. In this case I would listen to gateways_pr/gateway_1/avisos/00 and gateways_pr/gateway_1/avisos/01, but if I add new entries to "avisos" I would have to start listening to them too, and in the end my program could end up listening to thousands of references.
When you attach a child_changed listener to gateways_pr/gateway_1/avisos, you're asking the Firebase client to inform you when something changes in a child under that level. If something changes on a lower level, the Firebase client will raise the child_changed event on the level that you registered for. There is no way to change this behavior.
When you have the need to know precisely what changed under the listener, it typically means that you've modeled the data wrong for your use-case.
For example: if you want to listen for changes across the entire hierarchy, you should model a list of changes across the entire hierarchy and then attach a listener to that list. This is one of the many reasons that the Firebase documentation recommends keeping flat data structures.
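A hedged sketch of that flat change-list idea: every write also pushes an entry describing what changed, and clients listen to that list instead of the deep hierarchy (the gateway_changes path and entry shape are assumptions):
var db = firebase.database();
function setAviso(gateway, grupo, aviso, value) {
  var updates = {};
  updates["gateways_pr/" + gateway + "/avisos/" + grupo + "/" + aviso] = value;
  // Record the change in a flat list so listeners see exactly what changed.
  updates["gateway_changes/" + db.ref().push().key] = {
    path: gateway + "/" + grupo + "/" + aviso,
    value: value,
    timestamp: firebase.database.ServerValue.TIMESTAMP
  };
  return db.ref().update(updates); // multi-location atomic update
}
db.ref("gateway_changes").limitToLast(1).on("child_added", function (snap) {
  console.log("changed:", snap.val().path, "->", snap.val().value);
});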

How to add new fields to existing users

I'm in a bind - the Meteor app I've been developing for the last few weeks is finally online. But for an update, I need to add a field to my users' profiles.
I thought that calling a method with the following code would work:
updateUsrs_ResetHelps: function () {
  if (Meteor.users.update({}, {
    $set: {
      'profile.helps': []
    }
  }))
    console.log("All users profile updated: helps reset");
  else
    throw new Meteor.Error(500, 'Error 500: updateUsrs_ResetHelps',
      'the update couldn\'t be performed');
}
The problem is that my users have the classic Meteor accounts document, with emails, _id, services, profile, etc., but in the profile they don't have a .helps field. I need to create it.
For future users, I've modified the account creation function to add this field when they sign up, but for the 200 users who have already signed up, I really need a solution.
EDIT: Might it be because of the selector in the update? Is a simple {} selector valid for updating all the users/documents of the collection at once?
From the Mongo documentation (http://docs.mongodb.org/manual/reference/method/db.collection.update/):
By default, the update() method updates a single document. Set the multi parameter to update all documents that match the query criteria.
If you've already taken care of adding the field for new users and you just need to fix the old ones, why not just do it one time directly in the database?
Run meteor to start your application, then meteor mongo to connect to the database. Then run an update on records where the field doesn't already exist. Something like:
db.users.update({"profile.helps": {"$exists": false}}, {"$set": {"profile.helps": []}}, {multi:true})
The Mongo documentation specifies the multi parameter as:
Optional. If set to true, updates multiple documents that meet the query criteria. If set to false, updates one document. The default value is false.
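A small sketch applying that answer to the original Meteor method, so the same fix can be run from server code ($exists narrows it to users still missing the field; this is a suggestion, not the poster's final code):
updateUsrs_ResetHelps: function () {
  var updated = Meteor.users.update(
    { 'profile.helps': { $exists: false } }, // only users missing the field
    { $set: { 'profile.helps': [] } },
    { multi: true } // update every matching user, not just the first
  );
  console.log(updated + " user profiles updated: helps reset");
}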
