Ensuring a certain GraphQL resolver runs last - JavaScript

Is there a means to ensure that, during the resolution process, a certain resolver always gets run last?
My particular use case is that I'd like to optionally return some information to the client about the query that ran. This information is only available once all other resolvers have run, and which resolvers those are varies depending on what users query.
Let's say my root resolver looks like this:
{
  myQuery(args, context, info) {
    return this.dataLayer.load();
  },

  // This resolver must always run last
  queryInfo(args, context, info) {
    return this.dataLayer.queryInfo();
  }
}
How can I always ensure that queryInfo() is run last?
Approaches considered:
Including the data from queryInfo() in the data returned from myQuery(). In this case, the data returned by queryInfo() doesn't strictly belong to myQuery(), so it's less attractive semantically (sketched below).
Adding special request-tracking code to dataLayer. This has some timing weaknesses and is more complicated than it's worth. If that's the only way, we'll do without :).
Manually tweaking the resolved data after GraphQL is done with it. This effectively bypasses a really useful feature of GraphQL, so I'd rather not.
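For illustration, a rough sketch of the first approach, assuming dataLayer.load() and dataLayer.queryInfo() both return promises (the merged result shape is hypothetical):
{
  async myQuery(args, context, info) {
    const data = await this.dataLayer.load();
    // queryInfo() runs only after load() has settled, but its result is now
    // folded into myQuery's payload rather than living in its own field.
    const queryInfo = await this.dataLayer.queryInfo();
    return { ...data, queryInfo };
  }
}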

Why can't we get a returned value from sendTransaction() run on a smart contract?

All discussions on this mention that it's impossible to get a returned value from sendTransaction() called on a contract function that changes the contract state. I don't understand why the returned value can't be recorded in the transaction log on the blockchain, similarly to events, so that it could then be retrieved on transaction confirmation:
web3.eth.sendTransaction(...)
.on('confirmation', function(confirmationNumber, receipt) { ... // retrieving the value returned by the smart contract function here })
Logs are meant for describing the events emitted from the contract (which is the current solution for getting data out of transactions), so the return data can't go in there.
Including a return_data field in the receipt, though, has been discussed and apparently forgotten. EIP 758 proposes the following solution:
EIP 658 originally proposed adding return data to transaction receipts. However, return data is not charged for (as it is not stored on the blockchain), so adding it to transaction receipts could result in DoS and spam opportunities. Instead, a simple Boolean status field was added to transaction receipts. This modified version of EIP 658 was included in the Byzantium hard fork. While the status field is useful, applications often need the return data as well.
The primary advantage of using the strategy outlined here is efficiency: no extra data needs to be stored on the blockchain, and minimal extra computational load is imposed on nodes. Since light clients have the current state, they can compute and send return data notifications without contacting a server. Although after-the-fact lookups of the return value would not be supported, this is consistent with the conventional use of return data, which are only accessible to the caller when the function returns, and are not stored for later use.
And there is this Go client pull request, which didn't go through because the preferred solution would instead be an Ethereum hard fork, even though several hard forks have happened since then and this change still hasn't made it in.
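For reference, the event-based workaround mentioned above looks roughly like this with web3.js 1.x (the contract, method, and event names here are made up):
// The contract emits an event carrying the "return" value, and the caller
// reads it from the receipt once the transaction has been mined.
myContract.methods.doSomething(42)
  .send({ from: account })
  .on('receipt', function (receipt) {
    // receipt.events is keyed by event name in web3.js 1.x
    const result = receipt.events.SomethingDone.returnValues.result;
    console.log('value recorded in the transaction log:', result);
  });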

Firestore Offline Cache & Promises

This question is a follow-up on Firestore offline cache. I've read the offline cache documentation but am confused on one point.
One commenter answered in the prior question (about a year ago):
Your Android code that interacts with the database will be the same whether you're connected or not, since the SDK simply works the same.
In the API documentation for DocumentReference's set method, I just noticed that it says:
Returns non-null Promise containing void: A promise that resolves once the data has been successfully written to the backend. (Note that it won't resolve while you're offline).
Emphasis mine. Wouldn't this bit in the documentation suggest that the code won't behave the same, or am I missing something? If I'm waiting on the .set() to resolve before allowing some user interaction, it sounds from this bit like I need to adjust the code for an offline case differently than I would normally.
The CollectionReference's add method worries me a bit more. It doesn't have exactly the same note but says (emphasis mine):
A Promise that resolves with a DocumentReference pointing to the newly created document after it has been written to the backend.
That is a little more vague, as I'm not sure whether "backend" in this case is a superset of "cache" and "server" or whether it's meant to denote only the server. If this one doesn't resolve while offline, that would mean that the following wouldn't work, correct?
return new Promise((resolve, reject) => {
  let ref = firestore.collection(path)
  ref.add(data)
    .then(doc => {
      resolve({ id: doc.id, data: data })
    })
  ...
})
Meaning, the .add() would not resolve, .then() would not run, and I wouldn't have access to the id of the document that was just added. I hope I'm just misunderstanding something and that my code can continue to function as-is both online and offline.
You have two concerns here, which are not really related. I'll explain both of them separately.
For the most part, developers don't typically care whether the promise from a document update actually resolves or not. It's almost always "fire and forget". What would an app gain from knowing that the update hit the server, as long as the app behaves the same way regardless? The local cache has been updated, and all future queries will show that the document has been updated, even if the update hasn't been synchronized with the server yet.
The primary exception to this is transactions. Transactions require that the server be online, because round trips need to be made between the client and server in order to ensure that the update was atomic. Transactions simply don't work offline. If you need to know if a transaction worked, you need to be online. Unlike normal document writes, transactions don't persist in local cache. If the app is killed before the transaction completes on the server, the transaction is lost.
Your second concern is about newly added documents where the id of the document isn't defined at the time of the update. It's true that add() returns a promise that only resolves when the new document exists on the server. You can't know the id of the document until the promise delivers you the DocumentReference of the new document.
If this behavior doesn't work for you, you can generate a new id for a document by simply calling doc() with no arguments instead of add(). doc() immediately returns the DocumentReference of the new (future) document that hasn't been written (until you choose to write it). In both the case of doc() and add(), these DocumentReference objects contain unique ids generated on the client. The difference is that with doc(), you can use the id immediately, because you get a DocumentReference immediately. With add(), you can't, because the DocumentReference isn't provided until the promise resolves. If you need that new document id right now, even while offline, use doc() instead of add(). You can then use the returned DocumentReference to create the document offline, stored in the local cache, and synchronized later. The update will then return a promise that resolves when the document is actually written.
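A minimal sketch of that doc()-then-set() pattern (the collection name and data are just placeholders):
// doc() with no arguments returns a DocumentReference with a client-generated
// id, so the id is available immediately, even while offline.
const ref = firestore.collection('things').doc();
console.log('id available right away:', ref.id);

// set() writes the document. The returned promise resolves once the write
// reaches the server, but the local cache already reflects it and ref.id is
// usable in the meantime.
ref.set(data)
  .then(() => console.log('synchronized with the backend'))
  .catch(err => console.error('write failed:', err));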

Right way of passing consistent data from DB to user without repeatedly querying

The database stores some data about the user which almost never changes. Sometimes the information might change, for example if the user wants to edit his name.
The data consists of each user's name, username, and his company data.
The first two are shown in his navigation bar all the time using EJS (e.g. "User_1 is logged in"), and his company profile data is used when he needs to create an invoice.
My current way is to fetch user data through middleware using router.use, so the extracted information is available in all routes/views, for example:
router.use(function(req, res, next) { // this block runs as middleware on every route
  req.getConnection(function(err, conn) {
    if (err) {
      console.log(err);
      return next(new Error('MySQL error, check your connection'));
    }
    var uid = req.user.id;
    conn.query('SELECT * FROM user_profile WHERE uid = ?', uid, function(err, rows) {
      if (err) {
        console.log(err);
        return next(err); // next() takes a single error argument
      }
      // expose the profile to all later routes and views
      res.locals.userData = rows;
      return next();
    });
  });
})
I understand that this is not an optimal way of passing user profile data to every route/view, since it makes a new DB query every time the user navigates through the application.
What would be a better way of having this data available without repeating the same query in each route, yet having it re-fetched once the user changes a portion of this data, like his full name?
You've just stumbled into the world of "caching", welcome! Caching is a very popular choice for use cases like this, as well as many others. A cache is essentially somewhere to store data that you can get back much quicker than making a full DB query, or a file read, etc.
Before we go any further, it's worth considering your use case. If you're serving only a few users and have a low load on your service, caching might be over-engineering, and in fact making a DB request might be the simplest option. Adding caching introduces extra complexity to your code as things move forward; not enough to scare you, but enough to cause hard-to-trace bugs. So consider your service load for a moment: if it's not very high (say, an internal application at work with only a few requests every few minutes), then just reading from the DB probably isn't going to slow down a request too much, and it's the simplest and probably best solution. However, if you notice that this DB request is slowing down your application or making it harder to scale up, then caching might be for you.
A really popular approach for this would be to get something like "redis" which is a key-value database that holds everything in memory (RAM). Redis can sit as a service like MySQL and has a very basic query language. It is blindingly fast and can scale to enormous loads. If you're using Express, there are a number of NPM modules that help you access a redis instance. Simply push in your credentials and you can then make GET and SET requests (to get data or to set data).
In your example, you may wish to store a user's profile in JSON format against their user id or username in Redis. Then create a function called getUserProfile which takes in the ID or username. It can look the profile up in Redis; if it finds the record, it returns it to your main controller logic. If it does not, it looks it up in your MySQL database, saves it in Redis, and then returns it to the controller logic (so it will be available from the cache next time).
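A rough sketch of that lookup, assuming a connected node-redis v4 client and a hypothetical loadUserProfileFromDb(uid) helper wrapping the MySQL query from the middleware above:
const { createClient } = require('redis'); // node-redis v4, one of several Redis clients on NPM
const redisClient = createClient({ url: 'redis://localhost:6379' });
redisClient.connect();

async function getUserProfile(uid) {
  const cached = await redisClient.get('user_profile:' + uid);
  if (cached) {
    return JSON.parse(cached); // cache hit: no MySQL round trip
  }
  const profile = await loadUserProfileFromDb(uid); // cache miss: fall back to MySQL
  // store it for next time; expire after an hour so stale entries eventually clear
  await redisClient.set('user_profile:' + uid, JSON.stringify(profile), { EX: 3600 });
  return profile;
}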
Your next problem is notorious for being a very pesky problem in computer science: "cache invalidation". In this case, if the user's profile updates, you want to "invalidate" your cache. One way of doing this is to update the cached version whenever the user updates their profile (or any other cached data). Alternatively, you could simply remove the cached version from Redis; the next time it's requested through getUserProfile, it will be fetched fresh from the DB and then put back into Redis for next time.
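Continuing the sketch above (saveUserProfileToDb is again a hypothetical helper):
async function updateUserProfile(uid, updatedProfile) {
  await saveUserProfileToDb(uid, updatedProfile); // persist the change to MySQL first
  // simplest invalidation: drop the cached entry so the next read re-populates it...
  await redisClient.del('user_profile:' + uid);
  // ...or overwrite it in place instead:
  // await redisClient.set('user_profile:' + uid, JSON.stringify(updatedProfile), { EX: 3600 });
}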
There are many other ways to approach this, but this will most likely solve your problem in the simplest way without too much overhead. It will also be easy to expand in the future!

Meteor - Why should I use this.userId over Meteor.userId() whenever possible?

Judging from this comment by David Glasser in the GitHub issues:
this.userId is the primary API and Meteor.userId() is syntactic sugar for users new to JavaScript who might not understand the details of successfully using this yet
It seems like we should use this.userId whenever possible (such as inside a method function, where you can use both), and only use Meteor.userId() inside publish functions. If this assumption is correct, why?
(Referring to the relevant bits of the code would also be helpful, I can't seem to find it)
Your question seems to conflate Meteor.userId() and Meteor.user(). The body of the question seems to be asking about the former while the subject line is asking about the latter. I'll try to address both.
On the server, within a publish function, calling either Meteor.userId() or Meteor.user() will cause an error. Instead, use this.userId or Meteor.users.findOne(this.userId), respectively. However, note that the publish function is only called when a client subscribes. If you want the publication to change when the user record changes, you'll need to observe() the cursor returned by Meteor.users.find(this.userId) and take appropriate action when the record changes.
On the server, while a method call is being processed, Meteor.userId() and Meteor.user() will correspond to the ID of the calling user and their record, respectively. However, be aware that calls to Meteor.user() will result in a DB query because they are essentially equivalent to Meteor.users.findOne(Meteor.userId()).
Directly within a method call, you can also use this.userId instead of Meteor.userId(), but you are unlikely to see a significant performance difference. When the server receives the method call, it runs your method implementation with the user's ID (and some other info) stored in a particular slot on the fiber. Meteor.userId() just retrieves the ID from the slot on the current fiber. That should be fast.
It's generally easier to refactor code that uses Meteor.userId() than this.userId because you can't use this.userId outside of the method body (e.g. this won't have a 'userId' property within a function you call from the method body) and you can't use this.userId on the client.
On the client, Meteor.userId() and Meteor.user() will not throw errors and this.userId will not work. Calls to Meteor.user() are essentially equivalent to Meteor.users.findOne(Meteor.userId()), but since this corresponds to a mini-mongo DB query, performance probably won't be a concern. However, for security reasons the object returned by Meteor.user() may be incomplete (especially if the autopublish package is not installed).
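To make the server-side distinction concrete, here is a small illustrative sketch (the collection and method names are made up):
// In a publish function, use this.userId; Meteor.userId() would throw here.
Meteor.publish('myDocuments', function () {
  return Documents.find({ owner: this.userId });
});

// In a method, this.userId and Meteor.userId() are interchangeable.
Meteor.methods({
  addDocument: function (doc) {
    if (!this.userId) {
      throw new Meteor.Error('not-authorized');
    }
    return Documents.insert({ text: doc.text, owner: this.userId });
  }
});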
Simply speaking, Meteor.userId() does a lookup every time you use it. On the client side that's fine, logically, since we have Minimongo.
On the server side, using Meteor.userId() consumes extra resources on the SERVER, which at times is undesirable.
this.userId, on the other hand, is more like a session variable, i.e. it only has a value when there is a user id attached to the current session. Using the 'this' reference therefore won't go and fetch anything every time; it just uses the active session's userId.
Consider performance as a factor. That is the main reason for using this.userId rather than Meteor.userId().

BreezeJs: SaveChanges() server response getting dropped

I have BreezeJS running in an Angular app on a mobile device (Cordova), which talks to a .NET Web API.
Everything works great, except that once in a while the device will get primary key violations from my SQL Server.
I think I've narrowed it down to only happening when the device's data connection is shaky.
The only way I can figure these primary key violations are happening is that the server is saving the changes, but the mobile connection drops out before the response can come back from the server saying that everything saved OK.
What is supposed to happen when BreezeJS doesn't hear back from the server after calling saveChanges()?
Anyone familiar with BreezeJS know of a way to handle this scenario?
I've had to handle the same scenario in my project. The approach I took had two parts:
Add automatic retries to failed ajax requests. I'm using Breeze with jQuery, so I googled "jQuery retry ajax". There are many different implementations; mine is somewhat custom, but they all center around hijacking the onerror callback as well as the deferred's fail handler to inject retry logic (a rough sketch appears after the code below). I'm sure Angular has similar means of retrying dropped requests.
In the saveChanges fail handler, add logic like this:
...
function isConcurrencyException(reason: any) {
    return reason && reason.message && /Store update, insert, or delete statement affected an unexpected number of rows/.test(reason.message);
}

function isConnectionFailure(reason: any): boolean {
    return reason && reason.hasOwnProperty('status') && reason.status === 0;
}

entityManager.saveChanges()
    .then(... yay ...)
    .fail(function(reason) {
        if (isConnectionFailure(reason)) {
            // Retry attempts failed to reach the server.
            // Notify the user and save to local storage...
            return;
        }
        if (isConcurrencyException(reason)) {
            // EF is not letting me save the entities again because my previous save
            // (or another user's save) moved the concurrency stamps on the record.
            // There's also the possibility that a record I'm trying to save was deleted
            // by another user.
            // Recover... in my case I kept it simple and just attempt to reload the entity.
            // If nothing is returned, I know the entity was deleted; otherwise I now have
            // the latest version. In either case a message is shown to the user.
            return;
        }
        if (reason.entityErrors) {
            // An "entityErrors" property means the save failed due to server-side
            // validation errors. Do whatever you do to handle validation errors...
            return;
        }
        // An unexpected exception; let it bubble up.
        throw reason;
    })
    .done(); // terminate the promise chain (may not have an equivalent in Angular, not sure)
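For part 1, a very rough, Breeze-agnostic sketch of what such a jQuery retry wrapper might look like (the helper name and retry policy are illustrative only):
// Re-issues a failed $.ajax call up to `retries` times before giving up.
function ajaxWithRetry(settings, retries) {
    return $.ajax(settings).then(null, function (jqXHR, textStatus, errorThrown) {
        // status 0 usually means the request never reached the server (dropped connection)
        if (retries > 0 && jqXHR.status === 0) {
            return ajaxWithRetry(settings, retries - 1);
        }
        // out of retries (or a real server error): propagate the failure
        return $.Deferred().reject(jqXHR, textStatus, errorThrown).promise();
    });
}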
One of the ways you can test spotty connections is to use Fiddler's AutoResponder tab. Set up a *.drop rule with a regex that matches your breeze route and check the "Enable Automatic Responses" box when you want to simulate dropped requests.
This is a somewhat messy problem to solve; there is no one-size-fits-all answer, but I hope this helps.
NOTE
Ward makes a good point in the comments below. This approach is not suitable in situations where the entity's primary key is generated on the server (which would be the case if your db uses identity columns for PKs) because the retry logic could cause duplicate inserts.
