Firestore Offline Cache & Promises - javascript

This question is a follow-up on Firestore offline cache. I've read the offline cache documentation but am confused on one point.
One commenter answered in the prior question (about a year ago):
"Your Android code that interacts with the database will be the same whether you're connected or not, since the SDK simply works the same."
In the API documentation for DocumentReference's set method, I just noticed that it says:
Returns
non-null Promise containing void A promise that resolves once
the data has been successfully written to the backend. (Note that it
won't resolve while you're offline).
Emphasis mine. Wouldn't this bit in the documentation suggest that the code won't behave the same, or am I missing something? If I'm waiting on the .set() to resolve before allowing some user interaction, it sounds from this bit like I need to adjust the code for an offline case differently than I would normally.
The CollectionReference's add method worries me a bit more. It doesn't have exactly the same note but says (emphasis mine):
A Promise that resolves with a DocumentReference pointing to the newly created document after it has been written to the backend.
That is a little more vague, as I'm not sure if "backend" in this case is a superset of "cache" and "server" or if it's meant to denote only the server. If this one doesn't resolve, that would mean that the following wouldn't work, correct?
return new Promise((resolve, reject) => {
  let ref = firestore.collection(path)
  ref.add(data)
    .then(doc => {
      resolve({ id: doc.id, data: data })
    })
    ...
})
Meaning, the .add() would not resolve, .then() would not run, and I wouldn't have access to the id of the document that was just added. I hope I'm just misunderstanding something and that my code can continue to function as-is both online and offline.

You have two concerns here, which are not really related. I'll explain both of them separately.
For the most part, developers don't typically care if the promise from a document update actually resolves or not. It's almost always "fire and forget". What would an app gain to know that the update hit the server, as long as the app behaves the same way regardless? The local cache has been updated, and all future queries will show that the document has been updated, even if the update hasn't been synchronized with the server yet.
The primary exception to this is transactions. Transactions require that the server be online, because round trips need to be made between the client and server in order to ensure that the update was atomic. Transactions simply don't work offline. If you need to know if a transaction worked, you need to be online. Unlike normal document writes, transactions don't persist in local cache. If the app is killed before the transaction completes on the server, the transaction is lost.
Your second concern is about newly added documents where the id of the document isn't defined at the time of the update. It's true that add() returns a promise that only resolves when the new document exists on the server. You can't know the id of the document until the promise delivers you the DocumentReference of the new document.
If this behavior doesn't work for you, you can generate a new id for a document by simply calling doc() with no arguments instead of add(). doc() immediately returns the DocumentReference of the new (future) document that hasn't been written (until you choose to write it). In both the case of doc() and add(), these DocumentReference objects contain unique ids generated on the client. The difference is that with doc(), you can use the id immediately, because you get a DocumentReference immediately. With add(), you can't, because the DocumentReference isn't provided until the promise resolves. If you need that new document id right now, even while offline, use doc() instead of add(). You can then use the returned DocumentReference to create the document offline, stored in the local cache, and synchronized later. The update will then return a promise that resolves when the document is actually written.
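To make the shape of that concrete, here is a minimal sketch, assuming the Firebase JS SDK's `collection().doc()` and `set()` API; `createWithKnownId` is a hypothetical helper name:

```javascript
// Sketch of the doc()-instead-of-add() approach. `firestore` is assumed to
// be a Firestore instance from the Firebase JS SDK.
function createWithKnownId(firestore, path, data) {
  // doc() with no arguments generates the unique id locally, with no
  // network round trip, so this works while offline.
  const ref = firestore.collection(path).doc()
  // set() updates the local cache immediately; the returned promise only
  // resolves once the write reaches the backend.
  const pending = ref.set(data)
  return { id: ref.id, data: data, pending: pending }
}
```

The id is available synchronously, and you can decide separately whether (and when) to await `pending`.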

Related

Why can't we get a returned value from sendTransaction() run on a smart contract?

All discussions on this mention that it's impossible to get a returned value from sendTransaction() run on a contract function, where the contract state is being changed. I don't understand why the returned value can't be recorded in the transaction log on the blockchain, similarly to events, and so then it could be retrieved on the transaction confirmation:
web3.eth.sendTransaction(...)
  .on('confirmation', function (confirmationNumber, receipt) {
    // retrieving value returned by smart contract function here
  })
Logs are made for describing the events emitted from the contract (which is the current solution for getting data out of transactions), so the return data can't go in there.
Including a return_data field in the receipt, though, has been discussed and apparently forgotten. EIP 758 has the following solution:
EIP 658 originally proposed adding return data to transaction receipts. However, return data is not charged for (as it is not stored on the blockchain), so adding it to transaction receipts could result in DoS and spam opportunities. Instead, a simple Boolean status field was added to transaction receipts. This modified version of EIP 658 was included in the Byzantium hard fork. While the status field is useful, applications often need the return data as well.
The primary advantage of using the strategy outlined here is efficiency: no extra data needs to be stored on the blockchain, and minimal extra computational load is imposed on nodes. Since light clients have the current state, they can compute and send return data notifications without contacting a server. Although after-the-fact lookups of the return value would not be supported, this is consistent with the conventional use of return data, which are only accessible to the caller when the function returns, and are not stored for later use.
And there is this go client pull request, which didn't go through because the preferred solution would instead be an Ethereum hard fork (even though we have had several since then, it hasn't happened).
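To illustrate the current events-based workaround in code, here is a minimal sketch; it assumes a web3.js 1.x-style receipt where decoded events appear under `receipt.events`, and the event and field names are hypothetical:

```javascript
// Read a value a contract "returned" by emitting it as an event, given a
// web3.js 1.x-style transaction receipt. The event name and the
// returnValues field depend on your contract's ABI.
function readEmittedValue(receipt, eventName, field) {
  const ev = receipt.events && receipt.events[eventName]
  return ev ? ev.returnValues[field] : undefined
}
```

You would call this from the `'confirmation'` (or `'receipt'`) handler, on the receipt the PromiEvent hands you.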

Ember websocket/new object collision workaround clobbering hasMany promise

I have an Ember 3.8 app that uses Ember Data 3.8 and a websocket. We use the standard Ember Data API to preload data in Routes before processing, and for creating and deleting records. We also push all relevant updates to the database back to the browser through a websocket, where we pushPayload it into the store.
This pattern doesn't seem too uncommon - I've seen people discuss it elsewhere. A common problem with this pattern, however, is how to handle the collision when you are creating a record and the websocket sends down a copy of this new record before your POST resolves. Normally a pushPayload would simply update the record already existing with the payload's ID. For a record mid-save, the pushPayload creates a new record. Then when the POST returns and Ember Data tries assigning the ID to the pending record, it discovers there was already a record with that ID (from the websocket) and throws an exception.
To get around this we wrapped createRecord in some code that detects collisions and unloads the existing record with the ID so the pending record can be the true instance.
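Roughly, the collision check in such a wrapper looks like this (a sketch, not the exact code; it assumes Ember Data's `peekRecord`/`unloadRecord` store API):

```javascript
// Before the POST resolves and Ember Data assigns the server id, evict any
// record the websocket pushed in under that id, so the pending record can
// become the true instance.
function evictCollidingRecord(store, modelName, id) {
  const existing = store.peekRecord(modelName, id)
  if (existing) {
    store.unloadRecord(existing)
    return true
  }
  return false
}
```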
This worked well for a while, but now we're getting some odd behavior.
One of the models we create this way is the target of an async hasMany from another model.
Example:
model-parent: {
  children: hasMany('model-child', { async: true }),
}
When we have a collision during a save of 'model-child', the PromiseManyArray returned by get('children') is destroyed, but the content of the PromiseManyArray is not. Further, the parent.get keeps returning this destroyed PromiseManyArray. Later we try binding a computed to it, which throws an exception trying to bind to the destroyed PromiseManyArray.
eg:
> modelParent1.get('children').isDestroyed
true
> modelParent1.get('children').content.isDestroyed
false
children: computed('modelParent1.children.[]')
^^^ blows up at controller setup time when trying to add a chainwatcher to the destroyed PromiseManyArray
I've tried reloading the parent object and also reloading the hasMany to get a 'fixed' PromiseManyArray but it still returns the broken one.
I know this is a pretty specific setup but would appreciate any advice for any level of our issue - the websocket collision, the broken hasMany, etc.
I wouldn't know where to begin creating a twiddle for this, but if my workaround doesn't actually work, I'll give it a shot.

Node async race condition

I have a route in Node that:
Accepts a user ID
Gets the user from redis
Updates a property on the user
Saves the user back to redis
As redis uses async methods to get and save, if another request comes in for the same user, I get stale results.
What is the best pattern to make sure the second request doesn't process until the first is finished? Using sync versions of get and set seems wrong as it's locking, although I don't think it will have any noticeable effect in my application.
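One common in-process pattern (a sketch; the names are hypothetical) is to serialize the read-modify-write per user id by chaining each request onto the previous request's promise, so the second request doesn't start until the first has finished:

```javascript
// Serialize async operations per key. Each call for the same key waits for
// the previous call's promise to settle before its task runs.
const queues = new Map()

function withUserLock(userId, task) {
  const prev = queues.get(userId) || Promise.resolve()
  // Swallow the previous task's error so one failure doesn't block the queue.
  const next = prev.catch(() => {}).then(() => task())
  queues.set(userId, next)
  return next
}
```

You would wrap the whole get/update/save sequence in one `task`. Note this only serializes within a single Node process; with multiple processes you would need something like Redis `WATCH`/`MULTI` or a distributed lock instead.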

Ensuring a certain graphQL resolver runs last

Is there a means to ensure that, during the resolution process, a certain resolver always gets run last?
My particular use case is that I'd like to optionally return some information to the client about the query that ran - this information is only available once all other resolvers have run, which is arbitrary depending on what users query.
Let's say my root resolver looks like this:
{
  myQuery(args, context, info) {
    return this.dataLayer.load();
  },
  // This resolver must always be run last
  queryInfo(args, context, info) {
    return this.dataLayer.queryInfo();
  }
}
How can I always ensure that queryInfo() is run last?
Approaches considered:
Including the data from queryInfo() in the data returned from myQuery(). In this case, the data returned by queryInfo doesn't strictly belong to myQuery, so it's less attractive semantically.
Adding special request-tracking code to dataLayer. This has some timing weaknesses and is also more complicated than it's worth. If that's the only way, we'll do without :).
Manually tweaking the resolved data after GraphQL is done with it. This effectively bypasses a really useful feature of GraphQL, so I'd rather not.
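For what it's worth, a lightweight variant of the request-tracking idea can be sketched like this (hypothetical names; it assumes `dataLayer` and a fresh `pending` array are created per request on the context, and it only helps when queryInfo appears last in the query document, since graphql-js invokes sibling query resolvers in field order):

```javascript
// Sketch: collect data-layer promises on the per-request context so that
// queryInfo can wait for them before reading its stats.
const resolvers = {
  myQuery(args, context) {
    const p = context.dataLayer.load()
    context.pending.push(p)
    return p
  },
  async queryInfo(args, context) {
    // Wait for every data promise registered so far.
    await Promise.allSettled(context.pending)
    return context.dataLayer.queryInfo()
  }
}
```

The dependence on field order is exactly the kind of timing weakness mentioned above, so treat this as a sketch rather than a guarantee.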

Meteor - Why should I use this.userId over Meteor.userId() whenever possible?

Judging from this comment by David Glasser in the GitHub issues:
this.userId is the primary API and Meteor.userId() is syntactic sugar for users new to JavaScript who might not understand the details of successfully using this yet
It seems like we should use this.userId whenever possible (such as inside a method function, where you can use both), and only use Meteor.userId() inside publish functions. If this assumption is correct, why?
(Referring to the relevant bits of the code would also be helpful, I can't seem to find it)
Your question seems to conflate Meteor.userId() and Meteor.user(). The body of the question seems to be asking about the former while the subject line is asking about the latter. I'll try to address both.
On the server, within a publish function, calling either Meteor.userId() or Meteor.user() will cause an error. Instead, use this.userId or Meteor.users.findOne(this.userId), respectively. However, note that the publish function is only called when a client subscribes. If you want the publication to change when the user record changes, you'll need to observe() the cursor returned by Meteor.users.find(this.userId) and take appropriate action when the record changes.
On the server, while a method call is being processed, Meteor.userId() and Meteor.user() will correspond to the ID of the calling user and their record, respectively. However, be aware that calls to Meteor.user() will result in a DB query because they are essentially equivalent to Meteor.users.findOne(Meteor.userId()).
Directly within a method call, you can also use this.userId instead of Meteor.userId(), but you are unlikely to see a significant performance difference. When the server receives the method call, it runs your method implementation with the user's ID (and some other info) stored in a particular slot on the fiber. Meteor.userId() just retrieves the ID from the slot on the current fiber. That should be fast.
It's generally easier to refactor code that uses Meteor.userId() than this.userId because you can't use this.userId outside of the method body (e.g. this won't have a 'userId' property within a function you call from the method body) and you can't use this.userId on the client.
On the client, Meteor.userId() and Meteor.user() will not throw errors and this.userId will not work. Calls to Meteor.user() are essentially equivalent to Meteor.users.findOne(Meteor.userId()), but since this corresponds to a mini-mongo DB query, performance probably won't be a concern. However, for security reasons the object returned by Meteor.user() may be incomplete (especially if the autopublish package is not installed).
Simply speaking, Meteor.userId() does extra work every time you use it. On the client side that looks fine, since we have minimongo.
On the server side, using Meteor.userId() consumes extra resources, which at times is undesired.
this.userId, on the other hand, is more like a session variable: it has a value only when there is a userId attached to the current session. Using the this reference won't hit the database every time; it reads the active session's userId instead.
Consider performance as a factor; that is the main reason for using this.userId rather than Meteor.userId().
