Using dataloader for resolvers with nested data from ArangoDB - javascript

I'm implementing a GraphQL API over ArangoDB (with arangojs) and I want to know how to best implement dataloader (or similar) for this very basic use case.
I have 2 resolvers with DB queries shown below (both of these work), the first fetches Persons, the 2nd fetches a list of Record objects associated with a given Person (one to many). The association is made using ArangoDB's edge collections.
import { Database, aql } from 'arangojs'
import pick from 'lodash/pick'
const db = new Database('http://127.0.0.1:8529')
db.useBasicAuth('root', '')
db.useDatabase('_system')
// id is the auto-generated userId, which is `_key` in Arango
const fetchPerson = id => async (resolve, reject) => {
  try {
    const cursor = await db.query(aql`RETURN DOCUMENT("PersonTable", ${String(id)})`)
    // Unwrap the result from the cursor object
    const result = await cursor.next()
    return resolve(pick(result, ['_key', 'firstName', 'lastName']))
  } catch (err) {
    return reject(err)
  }
}
// id is the auto-generated userId (`_key` in Arango) that is associated with the records via the Person_HasMany_Records edge collection
const fetchRecords = id => async (resolve, reject) => {
  try {
    const edgeCollection = await db.collection('Person_HasMany_Records')
    // The query simply says: `get all connected nodes 1 step outward from the origin node, in edgeCollection`
    const cursor = await db.query(aql`
      FOR record IN 1..1
        OUTBOUND DOCUMENT("PersonTable", ${String(id)})
        ${edgeCollection}
        RETURN record`)
    return resolve(cursor.map(each =>
      pick(each, ['_key', 'intro', 'title', 'misc'])
    ))
  } catch (err) {
    return reject(err)
  }
}
export default {
  Query: {
    getPerson: (_, { id }) => new Promise(fetchPerson(id)),
    getRecords: (_, { ownerId }) => new Promise(fetchRecords(ownerId)),
  }
}
Now, if I want to fetch the Person data with the Records as nested data, in a single request, the query would be this:
aql`
  LET person = DOCUMENT("PersonTable", ${String(id)})
  LET records = (
    FOR record IN 1..1
      OUTBOUND person
      ${edgeCollection}
      RETURN record
  )
  RETURN MERGE(person, { records: records })`
So how should I update my API to employ batch requests / caching? Can I somehow run fetchRecords(id) inside of fetchPerson(id) but only when fetchPerson(id) is invoked with the records property included?
Here's the setup file. Notice I'm using graphql-tools, because I took this from a tutorial somewhere.
import http from 'http'
import db from './database'
import schema from './schema'
import resolvers from './resolvers'
import express from 'express'
import bodyParser from 'body-parser'
import { graphqlExpress, graphiqlExpress } from 'apollo-server-express'
import { makeExecutableSchema } from 'graphql-tools'
const app = express()
// bodyParser is needed just for POST.
app.use('/graphql', bodyParser.json(), graphqlExpress({
  schema: makeExecutableSchema({ typeDefs: schema, resolvers })
}))
app.get('/graphiql', graphiqlExpress({ endpointURL: '/graphql' })) // if you want GraphiQL enabled
app.listen(3000)
And here's the schema.
export default `
  type Person {
    _key: String!
    firstName: String!
    lastName: String!
  }
  type Records {
    _key: String!
    intro: String!
    title: String!
    misc: String!
  }
  type Query {
    getPerson(id: Int!): Person
    getRecords(ownerId: Int!): [Records]!
  }
  type Schema {
    query: Query
  }
`

So, the real benefit of dataloader is that it stops you from doing n+1 queries. For example, if Person had a records field in your schema and you asked for the first 10 people's records, a naive GraphQL schema would fire 11 requests: 1 for the first 10 people, and then one more for each person's records.
With dataloader implemented, you cut that down to two requests: one for the first 10 people, and then one for all of the records of the first ten people.
With your schema above, it doesn't seem that you can benefit in any way from dataloader, since there's no possibility of n+1 queries. The only benefit you might get is caching if you make multiple requests for the same person or records within a single request (which again, isn't possible based on your schema design unless you are using batched queries).
Let's say you want the caching though. Then you could do something like this:
// loaders.js
import DataLoader from 'dataloader'

// The callback functions take a list of keys and return a list of values to
// hydrate those keys, in order, with `null` for any value that cannot be hydrated
export default {
  personLoader: new DataLoader(loadBatchedPersons),
  personRecordsLoader: new DataLoader(loadBatchedPersonRecords),
};
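The batch callbacks are left undefined above; here's a minimal sketch of what they might look like against the ArangoDB setup from the question (collection names taken from there; they'd be defined above the export in loaders.js):

// Minimal sketch, assuming the `db` and `aql` from the question's database module.
// DataLoader hands over every key requested in one tick; we resolve them in one AQL query.
const loadBatchedPersons = async (keys) => {
  const cursor = await db.query(aql`
    FOR key IN ${keys.map(String)}
      RETURN DOCUMENT("PersonTable", key)
  `)
  // DOCUMENT() yields null for a missing key, so order and length match `keys`
  return cursor.all()
}

const loadBatchedPersonRecords = async (keys) => {
  const cursor = await db.query(aql`
    FOR key IN ${keys.map(String)}
      RETURN (
        FOR record IN 1..1
          OUTBOUND DOCUMENT("PersonTable", key)
          Person_HasMany_Records
          RETURN record
      )
  `)
  // One array of records per key, in the same order as `keys`
  return cursor.all()
}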
You then want to attach the loaders to your context for easy sharing. (In a real app you'd typically construct fresh loaders inside the context function on each request, so the per-request cache isn't shared between users.) Modified example from the Apollo docs:
// app.js
import loaders from './loaders';
app.use(
  '/graphql',
  bodyParser.json(),
  graphqlExpress(req => {
    return {
      schema: myGraphQLSchema,
      context: {
        loaders,
      },
    };
  }),
);
Then, you can use them from the context in your resolvers:
// ViewerType.js:
// Some parent type, such as `viewer` often
{
  person: {
    type: PersonType,
    resolve: async (viewer, args, context, info) =>
      // `viewer.personId` is illustrative; call .load() with whatever key your loader batches on
      context.loaders.personLoader.load(viewer.personId),
  },
  records: {
    type: new GraphQLList(RecordType), // This could also be a connection
    resolve: async (viewer, args, context, info) =>
      context.loaders.personRecordsLoader.load(viewer.personId),
  },
}

I guess I was confused about the capability of dataloader. Serving nested data was really the stumbling block for me.
This is the missing code. The export from resolvers.js needed a Person property:
export default {
  Person: {
    records: (person) => new Promise(fetchRecords(person._key)),
  },
  Query: {
    getPerson: (_, { id }) => new Promise(fetchPerson(id)),
    getRecords: (_, { ownerId }) => new Promise(fetchRecords(ownerId)),
  },
}
And the Person type in the schema needed a records property.
type Person {
  _key: String!
  firstName: String!
  lastName: String!
  records: [Records]!
}
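With that in place, a nested query like this one (illustrative values) resolves both levels in a single request:

query {
  getPerson(id: 1) {
    firstName
    records {
      title
    }
  }
}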
Seems these features are provided by Apollo graphql-tools.
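And if I later want the batching discussed in the accepted answer, a sketch of the same resolvers wired to loaders on the context (loader names from the answer above; everything else illustrative) would be:

export default {
  Person: {
    // Every Person resolved in the same tick gets its records batched into one query
    records: (person, _args, { loaders }) =>
      loaders.personRecordsLoader.load(person._key),
  },
  Query: {
    getPerson: (_, { id }, { loaders }) => loaders.personLoader.load(String(id)),
    getRecords: (_, { ownerId }, { loaders }) =>
      loaders.personRecordsLoader.load(String(ownerId)),
  },
}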

.findByIdAndUpdate is not a function error - coming from controller where the model has been required?

I'm working on a web application for my company to view a database of customers and their data using MongoDB, Mongoose, and Express. Our company resells used copiers/printers and also provides maintenance contracts for machines. I want to save each customer as a document, with machines as separate linked documents.
I have models, controllers, and routes set up for customers and machines. I am getting the following error when trying to delete a machine from its customer:
Customer.findByIdAndUpdate is not a function
TypeError: Customer.findByIdAndUpdate is not a function
    at module.exports.deleteMachine (C:\controllers\machines.js:21:20)
    at C:\utils\catchAsync.js:3:9
    at Layer.handle [as handle_request] (C:\node_modules\express\lib\router\layer.js:95:5)
    at next (C:\node_modules\express\lib\router\route.js:144:13)
    at module.exports.getCustomer (C:\middleware.js:15:5)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
My code is as follows:
Controller for Machines:
const Customer = require('../models/customer');
const Machine = require('../models/machine');

module.exports.deleteMachine = async (req, res) => {
  const { id, machineId } = req.params;
  await Customer.findByIdAndUpdate(id, { $pull: { machines: machineId } });
  await Machine.findByIdAndDelete(machineId);
  req.flash('success', 'Machine has been deleted');
  res.redirect(`/customers/${id}`);
};
Route for Machines:
router.delete('/:machineId', getCustomer, catchAsync(machines.deleteMachine));
the "getCustomer" middleware is as follows - its only purpose is to ensure a valid customer is being requested and to set the "foundCustomer" to make my life easier elsewhere. I don't think it is the issue, but I'm including it just for clarity:
module.exports.getCustomer = async (req, res, next) => {
  const { id } = req.params;
  const customer = await Customer.findById(id).populate({ path: 'machines' });
  if (!customer) {
    req.flash('error', 'Sorry, that customer cannot be found!');
    return res.redirect('/customers');
  }
  res.locals.foundCustomer = customer;
  next();
};
The relevant routes have been set as follows in my app.js:
const customerRoutes = require('./routes/customers');
const machineRoutes = require('./routes/machines');
app.use('/customers', customerRoutes);
app.use('/customers/:id/machines', machineRoutes);
I haven't run into any issues with other machine routes, so I'm not sure why this one is throwing an error. This application is actually the second version that I've made, and the first version uses the exact same code, with no issue. So I'm super stumped.
Any help is greatly appreciated!
Customer Model -
const customerSchema = new Schema({
  customer: String,
  customerID: String,
  category: {
    type: String,
    enum: ['contracted', 'billable']
  },
  contacts: [contactSchema],
  address: String,
  city: String,
  state: String,
  zip: String,
  county: String,
  machines: [
    {
      type: Schema.Types.ObjectId,
      ref: 'Machine'
    }
  ],
  notes: [noteSchema]
});
I'm a dummy. I exported the Customer model as part of an object of named exports, like this:
const Customer = mongoose.model('Customer', customerSchema);

module.exports = {
  Customer: Customer,
  Note: Note,
  Contact: Contact
};
When requiring the model in my Machine controller I had it formatted as:
const Customer = require('../models/customer');
To get it working correctly I needed to require it like this:
const { Customer } = require('../models/customer');
After making that change everything is working correctly, and I can move on with my life/application.

Does Apollo cache the returned data from a mutation

I'm using Apollo Client in a React app and I need to run a mutation, then keep the returned data for later use (but I won't have access to the variables). Do I have to use another state management solution, or can I do this in Apollo?
I've read about doing this with query but not mutation.
Here's my code so far
// Mutation
const [myMutation, { data, errors, loading }] = useMutation(MY_MUTATION, {
  onCompleted({ myMutation }) {
    console.log('myMutation: ', myMutation.dataToKeep);
    if (myMutation && myMutation.dataToKeep)
      SetResponse(myMutation.dataToKeep);
  },
  onError(error) {
    console.error('error: ', error);
  },
});
// How I call it
onClick={() => {
  myMutation({
    variables: {
      input: {
        phoneNumber: '0000000000',
        id: '0000',
      },
    },
  });
}}
Edit: here is the mutation
export const MY_MUTATION = gql`
  mutation MyMutation($input: MyMutationInput!) {
    myMutation(input: $input) {
      dataToKeep
      expiresAt
    }
  }
`;
and the schema for this mutation
MyMutationInput:
  phoneNumber: String!
  id: String!

MyMutationPayload:
  dataToKeep
  expiresAt
Case 1: Payload is using common entities
Simply put, the Apollo Client's cache keeps everything received from queries and mutations, though the schema needs to include id: ID! fields, and any query needs to select both the id and __typename fields on relevant nodes for the client to know which part of the cache to update.
This assumes that the mutation payload is common data from the schema that can be retrieved through a normal query. This is the best case scenario.
Given the following schema on the server:
type User {
  id: ID!
  phoneNumber: String!
}

type Query {
  user(id: String!): User!
}

type UpdateUserPayload {
  user: User!
}

type Mutation {
  updateUser(id: String!, phoneNumber: String!): UpdateUserPayload!
}
And assuming a cache is used on the client:
import { InMemoryCache, ApolloClient } from '@apollo/client';

const client = new ApolloClient({
  // ...other arguments...
  cache: new InMemoryCache(options)
});
The cache generates a unique ID for every identifiable object included in the response.
The cache stores the objects by ID in a flat lookup table.
Whenever an incoming object is stored with the same ID as an existing object, the fields of those objects are merged.
If the incoming object and the existing object share any fields, the incoming object overwrites the cached values for those fields.
Fields that appear in only the existing object or only the incoming object are preserved.
Normalization constructs a partial copy of your data graph on your client, in a format that's optimized for reading and updating the graph as your application changes state.
The client's mutation should be
mutation UpdateUserPhone($phoneNumber: String!, $id: String!) {
  updateUser(id: $id, phoneNumber: $phoneNumber) {
    user {
      __typename # Added by default by the Apollo client
      id # Required to identify the user in the cache
      phoneNumber # Field that'll be updated in the cache
    }
  }
}
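To make the merge behavior concrete, here's roughly what the normalized entry looks like after this mutation completes (a sketch; the entry key follows the `Typename:id` convention, and the values are illustrative):

// Sketch of the cache's flat lookup table after UpdateUserPhone completes.
// The key is `${__typename}:${id}`; the field values shown are illustrative.
{
  'User:0000': {
    __typename: 'User',
    id: '0000',
    phoneNumber: '0000000000',
  },
}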
Then, any component using this user through the same Apollo client in the app will be up to date automatically. There's nothing special to do, the client will use the cache by default and trigger renders whenever the data changes.
import { gql, useQuery } from '@apollo/client';

const USER_QUERY = gql`
  query GetUser($id: String!) {
    user(id: $id) {
      __typename
      id
      phoneNumber
    }
  }
`;

const UserComponent = ({ userId }) => {
  const { loading, error, data } = useQuery(USER_QUERY, {
    variables: { id: userId },
  });

  if (loading) return null;
  if (error) return `Error! ${error}`;

  return <div>{data.user.phoneNumber}</div>;
}
The fetchPolicy option defaults to cache-first.
Case 2: Payload is custom data specific to the mutation
If the data is in fact not available elsewhere in the schema, it won't be possible to use the Apollo cache automatically as explained above.
Use another state management solution
A couple options:
local storage
React's context API
etc.
Here's an example from the Apollo GraphQL documentation using localStorage:
const [login, { loading, error }] = useMutation(LOGIN_USER, {
  onCompleted({ login }) {
    localStorage.setItem('token', login.token);
    localStorage.setItem('userId', login.id);
  }
});
Define a client-side schema
This is a pure Apollo GraphQL solution since the client is also a state management library, which enables useful developer tooling and helps reason about the data.
Create a local schema.
// schema.js
export const typeDefs = gql`
  type DataToKeep {
    # anything here
  }

  extend type Query {
    dataToKeep: DataToKeep # probably nullable?
  }
`;
Initialize a custom cache
// cache.js
import { InMemoryCache, makeVar } from '@apollo/client';

export const dataToKeepVar = makeVar(null);

export const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        dataToKeep: {
          read() {
            return dataToKeepVar();
          }
        },
      }
    }
  }
});
Apply the schema override at the client's initialization
import { ApolloClient } from '@apollo/client';
import { cache } from './cache';
import { typeDefs } from './schema';

const client = new ApolloClient({
  cache,
  typeDefs,
  // other options like headers, uri, etc.
});
Keep track of the changes in the mutation:
const [myMutation, { data, errors, loading }] = useMutation(MY_MUTATION, {
  onCompleted({ myMutation }) {
    if (myMutation && myMutation.dataToKeep)
      dataToKeepVar(myMutation.dataToKeep);
  }
});
Then, query the @client field.
import { gql, useQuery } from '@apollo/client';

const DATA_QUERY = gql`
  query dataToKeep {
    dataToKeep @client {
      # anything here
    }
  }
`;

const AnyComponent = () => {
  const { loading, error, data } = useQuery(DATA_QUERY);

  if (loading) return null;
  if (error) return `Error! ${error}`;

  return <div>{JSON.stringify(data.dataToKeep)}</div>;
}
See also the documentation on managing local state.

Best practice to combine multiple REST calls to populate one GraphQL type in apollo-server

I have a GraphQL User type that needs information from multiple REST APIs on different servers.
Basic example: get the user's firstname from REST domain 1 and their lastname from REST domain 2. Both REST domains have a common "userID" attribute.
A simplified example of my resolver code at the moment:
user: async (_source, args, { dataSources }) => {
  try {
    const datasource1 = await dataSources.RESTAPI1.getUser(args.id);
    const datasource2 = await dataSources.RESTAPI2.getUser(args.id);
    return { ...datasource1, ...datasource2 };
  } catch (error) {
    console.log("An error occurred.", error);
  }
  return [];
}
This works fine for this simplified version, but I have 2 problems with this solution:
First, in real life there is a lot of logic that goes into merging the 2 JSON results. Some fields are shared but hold different data (or are empty), so it's like cherry-picking both results to create a combined result.
My second problem is that this is still a waterfall method: first get the data from restapi1, and only when that's done, call restapi2. Basically apollo-server is reintroducing the REST waterfall of fetches that GraphQL tries to solve.
Keeping these 2 problems in mind: can I optimise this piece of code or rewrite it for better performance or readability? Or are there any packages that might help with this behavior?
Many thanks!
With regard to performance, if the two calls are independent of one another, you can utilize Promise.all to execute them in parallel:
const [dataSource1, dataSource2] = await Promise.all([
  dataSources.RESTAPI1.getUser(args.id),
  dataSources.RESTAPI2.getUser(args.id),
])
We normally let GraphQL's default resolver logic do the heavy lifting, but if you're finding that you need to "cherry pick" the data from both calls, you can return something like this in your root resolver:
return { dataSource1, dataSource2 }
and then write resolvers for each field:
const resolvers = {
  User: {
    someField: ({ dataSource1, dataSource2 }) => {
      return dataSource1.a || dataSource2.b
    },
    someOtherField: ({ dataSource1, dataSource2 }) => {
      return someCondition ? dataSource1.foo : dataSource2.bar
    },
  }
}
Assuming your user resolver returns type User, for argument's sake...
type User {
  id: ID!
  datasource1: RandomType
  datasource2: RandomType
}
You can create individual resolvers for each field in type User; this can reduce the complexity of the user query to resolving only the requested fields.
query {
  user {
    id
    datasource1 {
      ...
    }
  }
}
const resolvers = {
  Query: {
    user: () => {
      return { id: "..." };
    }
  },
  User: {
    datasource1: () => { ... },
    datasource2: () => { ... } // I won't execute (not requested in the query above)
  }
};
The datasource1 & datasource2 resolvers will execute in parallel, but only after Query.user resolves.
For a parallel call:
const users = async (_source, args, { dataSources }) => {
  try {
    // Call through each data source object (rather than destructuring `getUser`)
    // so the method keeps its `this` binding
    const promises = [
      dataSources.RESTAPI1,
      dataSources.RESTAPI2
    ].map(api => api.getUser(args.id));
    const data = await Promise.all(promises);
    return Object.assign({}, ...data);
  } catch (error) {
    console.log("An error occurred.", error);
  }
  return [];
};

Paginating the combined results of a union

I'm still a little new to GraphQL and Apollo. But I'm curious: if I create a search query that returns a union of two types, how would I go about modifying the combined results of that union? Using the Apollo docs example (Unions and interfaces):
union Result = Book | Author

type Book {
  title: String
}

type Author {
  name: String
}

type Query {
  search: [Result]
}
const resolvers = {
  Result: {
    __resolveType(obj, context, info) {
      if (obj.name) {
        return 'Author';
      }
      if (obj.title) {
        return 'Book';
      }
      return null;
    },
  },
  Query: {
    search: () => { ... }
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`)
});
After both the Book and Author resolvers complete, let's say I have 200 results: 50 books and 150 authors. If I wanted to limit that to 100 of either, sorted alphabetically, how would I access the resolved array before returning it to the client? The Apollo Server type definitions have an IResolverOptions interface with a resolve function, but if I add that to my Result resolver I get an error saying that Result.resolve is defined in the resolver but not the schema.
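One thing worth knowing here: a union has no collective resolve hook, so the place to shape the combined list is the Query.search resolver itself, before __resolveType runs on each item. A hedged sketch (fetchBooks and fetchAuthors are illustrative helpers):

const resolvers = {
  Query: {
    search: async () => {
      // Fetch both sources in parallel, then combine, sort alphabetically,
      // and cap the combined list at 100 before returning it to the client
      const [books, authors] = await Promise.all([fetchBooks(), fetchAuthors()]);
      return [...books, ...authors]
        .sort((a, b) => (a.title || a.name).localeCompare(b.title || b.name))
        .slice(0, 100);
    },
  },
};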

GraphQL error on subfields with graphql-yoga

I'm trying to query a GraphQL API via a proxy of another GraphQL API and am receiving an error. I'm using graphql-yoga for my server and connecting to another GraphQL API from a CMS. Here is my code:
server.js
const { GraphQLServer } = require('graphql-yoga');
const Prismic = require('./prismic.config.js');
const gql = require('graphql-tag');

const typeDefs = `
  type Query {
    page(lang: String, uid: String): Page
  }

  type Page {
    page_title: [TextField]
  }

  type TextField {
    text: String
  }
`

const resolvers = {
  Query: {
    page: (parent, args, context, info) => {
      const query = gql`${context.request.body.query}`;
      const result = context.Prismic.query({
        query,
        variables: { ...args }
      })
        .then(resp => {
          return resp.data.page;
        })
        .catch(err => console.log(err));
      return result;
    }
  }
}

const server = new GraphQLServer({
  typeDefs,
  resolvers,
  context: req => ({ ...req, Prismic })
})

server.start(() => console.log('Server is running on localhost:4000'))
Here is my query, from the GraphQL Playground that comes with graphql-yoga:
query {
  page(lang: "en-gb", uid: "homepage") {
    page_title {
      text
    }
  }
}
The error I'm receiving is:
Query does not pass validation. Violations:

Field 'page_title' of type 'Json' must not have a sub selection. (line 3, column 5):
  page_title {
  ^
The strange thing is I can get a valid response if, on the server, I hardcode the query without the nested text field, as the error suggests, like so:
// const query = gql`${context.request.body.query}`;
const query = gql`
  query($uid: String!) {
    page(lang: "en-gb", uid: $uid) {
      page_title
    }
  }
`;
Attempting to modify my query in GraphQL Playground to not include the nested text field, like so:
query {
  page(lang: "en-gb", uid: "homepage") {
    page_title
  }
}
Gives me the following error and does not allow me to make the request at all:
field "page_title" of type "[TextField]" must have a selection of subfields. Did you mean "page_title { ... }"?
The error suggests that I need to add the nested subfield of text, which is intended, but when I use this query instead of the hardcoded one on the server, it gives me the error mentioned earlier.
Not sure if I've gone wrong somewhere in my setup?
Thanks
In your GraphQL schema, page_title: [TextField] is not one of the scalar types.
As a result, when making a query you need to specify exactly which fields you want to fetch, and every field in the query must be expanded down to the level of scalar types so GraphQL knows how to resolve the query.
So this is the only query shape your proxy schema accepts from the client side (e.g. from the GraphQL Playground that comes with graphql-yoga):
query {
  page(lang: "en-gb", uid: "homepage") {
    page_title {
      text
    }
  }
}
But the server-side error comes from your approach of forwarding the client's GraphQL query inside your resolver:
const result = context.Prismic.query({
  query,
  variables: { ...args }
})
So I'm 100% sure that page_title in Prismic's schema is the custom scalar Json. As a result, you can't forward the same query for this request.
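If that's the case, one way out (a hedged sketch, going only on what the question shows of Prismic's API; the Json shape is an assumption) is to stop forwarding the client's query verbatim, send Prismic a query matching its own schema, and reshape the Json value into the [TextField] shape your proxy promises:

// Query matching Prismic's schema: page_title is a Json scalar there,
// so it must be selected without subfields.
const PRISMIC_QUERY = gql`
  query($lang: String, $uid: String!) {
    page(lang: $lang, uid: $uid) {
      page_title
    }
  }
`;

const resolvers = {
  Query: {
    page: async (parent, args, context) => {
      const resp = await context.Prismic.query({
        query: PRISMIC_QUERY,
        variables: { ...args },
      });
      // Assumption: the Json value is an array of spans shaped like { text: ... }
      return {
        page_title: resp.data.page.page_title.map(({ text }) => ({ text })),
      };
    },
  },
};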
