Paginating the combined results of a union - javascript

I'm still a little new to GraphQL and Apollo, but I'm curious: if I create a search query that returns a union of two types, how would I go about modifying the combined results of that union? Using the Apollo docs example (Unions and interfaces):
union Result = Book | Author

type Book {
  title: String
}

type Author {
  name: String
}

type Query {
  search: [Result]
}

const resolvers = {
  Result: {
    __resolveType(obj, context, info) {
      if (obj.name) {
        return 'Author';
      }
      if (obj.title) {
        return 'Book';
      }
      return null;
    },
  },
  Query: {
    search: () => { ... }
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
After both the Book and Author resolvers complete, let's say I have 200 results: 50 Books and 150 Authors. If I wanted to limit that to 100 of either, sorted alphabetically, how would I access the resolved array before returning it to the client? The Apollo Server type definitions have an IResolverOptions interface with a resolve function, but if I add that to my Result resolver I get an error saying that Result.resolve is defined in the resolver but not in the schema.
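One way to approach this (just a sketch, not from the original thread): __resolveType only classifies items that have already been fetched, so sorting and trimming the combined list would normally happen inside the Query.search resolver itself, before the array is returned and before Apollo ever calls __resolveType. The fetchBooks and fetchAuthors helpers below are hypothetical placeholders for however the two result sets are actually loaded.

const resolvers = {
  Query: {
    search: async () => {
      // fetchBooks/fetchAuthors are hypothetical helpers standing in for the real lookups
      const [books, authors] = await Promise.all([fetchBooks(), fetchAuthors()]);
      const combined = [...books, ...authors];
      // sort alphabetically on whichever label field each item carries
      combined.sort((a, b) => (a.title || a.name).localeCompare(b.title || b.name));
      // keep only the first 100 of the combined, sorted results
      return combined.slice(0, 100);
    },
  },
  Result: {
    __resolveType(obj) {
      if (obj.name) return 'Author';
      if (obj.title) return 'Book';
      return null;
    },
  },
};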

Asynchronous verification within the .map function

I am developing the backend of an application using Node.js, Sequelize, and a Postgres database.
When a course is registered, the user must specify which organizations, companies, and teachers will be linked to it.
The organization IDs are passed to the backend in an array, and I am trying to check that the passed IDs exist.
What I've done so far is this:
const { organizations } = req.body;

const organizationsArray = organizations.map(async (organization) => {
  const organizationExists = await Organization.findByPk(organization);

  if (!organizationExists) {
    return res
      .status(400)
      .json({ error: `Organization ${organization} does not exists!` });
  }

  return {
    course_id: id,
    organization_id: organization,
  };
});

await CoursesOrganizations.bulkCreate(organizationsArray);
This link has the complete controller code; I believe it will make things easier to understand.
When !organizationExists is true, I get the response that the organization does not exist. The problem is when the organization does exist: then I get the following error message.
Array.map() returns an array of promises, which you can resolve to an array of values using Promise.all(). Inside the map you should throw new Error() to break out; that error will be raised by Promise.all(), and you can then catch it and return an error to the client (or swallow it, etc.).
This is a corrected version of your pattern, resolving the Promise results.
const { organizations } = req.body;

try {
  // use Promise.all to resolve the promises returned by the async callback function
  const organizationsArray = await Promise.all(
    // this will return an array of promises
    organizations.map(async (organization) => {
      const organizationExists = await Organization.findByPk(organization, {
        attributes: ['id'], // we only need the ID
        raw: true, // don't need Instances
      });
      if (!organizationExists) {
        // don't send response inside the map, throw an Error to break out
        throw new Error(`Organization ${organization} does not exists!`);
      }
      // it does exist so return/resolve the value for the promise
      return {
        course_id: id,
        organization_id: organization,
      };
    })
  );
  // if we get here there were no errors, create the records
  await CoursesOrganizations.bulkCreate(organizationsArray);
  // return a success to the client
  return res.json({ success: true });
} catch (err) {
  // there was an error, return it to the client
  return res.status(400).json({ error: err.message });
}
This is a refactored version that will be a bit faster by fetching all the Organizations in one query and then doing the checks/creating the Course inserts.
const { Op } = Sequelize;
const { organizations } = req.body;

try {
  // get all Organization matches for the IDs
  const organizationsArray = await Organization.findAll({
    attributes: ['id'], // we only need the ID
    where: {
      id: {
        [Op.in]: organizations, // WHERE id IN (organizations)
      },
    },
    raw: true, // no need to create Instances
  });
  // create an array of the IDs we found
  const foundIds = organizationsArray.map((org) => org.id);
  // check to see if any of the IDs are missing from the results
  if (foundIds.length !== organizations.length) {
    // Use Array.reduce() to figure out which IDs are missing from the results
    const missingIds = organizations.reduce((missingIds, orgId) => {
      if (!foundIds.includes(orgId)) {
        missingIds.push(orgId);
      }
      return missingIds;
    }, []); // initialized to empty array
    throw new Error(`Unable to find Organization for: ${missingIds.join(', ')}`);
  }
  // now create an array of courses to create using the foundIds
  const courses = foundIds.map((orgId) => {
    return {
      course_id: id,
      organization_id: orgId,
    };
  });
  // if we get here there were no errors, create the records
  await CoursesOrganizations.bulkCreate(courses);
  // return a success to the client
  return res.json({ success: true });
} catch (err) {
  // there was an error, return it to the client
  return res.status(400).json({ error: err.message });
}
If you have an array of IDs and you want to check whether they all exist, you should use the Op.in operator. That way you hit the DB only once and get all the records in a single query (instead of fetching them one by one in a loop); once you have those records, you can compare lengths to determine whether every ID exists.
const { Op } = require("sequelize");

let foundOrgs = await Organization.findAll({
  where: {
    id: {
      [Op.in]: organizationsArray,
    },
  },
});
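The length check described above could then look something like this (a sketch building directly on the snippet, reusing the organizationsArray of requested IDs):

// compare lengths to detect IDs that were not found
if (foundOrgs.length !== organizationsArray.length) {
  const foundIds = foundOrgs.map((org) => org.id);
  const missingIds = organizationsArray.filter((id) => !foundIds.includes(id));
  return res
    .status(400)
    .json({ error: `Unable to find Organization for: ${missingIds.join(', ')}` });
}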

Best practice to combine multiple REST calls to populate one GraphQL type in apollo-server

I have a GraphQL User type that needs information from multiple REST APIs on different servers.
Basic example: get the user's first name from REST domain 1 and the last name from REST domain 2. Both REST domains have a common "userID" attribute.
A simplified example of my resolver code at the moment:
user: async (_source, args, { dataSources }) => {
  try {
    const datasource1 = await dataSources.RESTAPI1.getUser(args.id);
    const datasource2 = await dataSources.RESTAPI2.getUser(args.id);
    return { ...datasource1, ...datasource2 };
  } catch (error) {
    console.log("An error occurred.", error);
  }
  return [];
}
This works fine for this simplified version, but I have two problems with this solution:
First, in real life there is a lot of logic going into merging the two JSON results, since some fields are shared but hold different data (or are empty). So it's like cherry-picking both results to create a combined result.
My second problem is that this is still a waterfall method: first get the data from restapi1, and only when that's done call restapi2. Basically apollo-server is reintroducing the REST waterfall of fetches that GraphQL tries to solve.
Keeping these two problems in mind, can I optimise this piece of code or rewrite it for better performance or readability? Or are there any packages that might help with this behaviour?
Many thanks!
With regard to performance, if the two calls are independent of one another, you can utilize Promise.all to execute them in parallel:
const [dataSource1, dataSource2] = await Promise.all([
  dataSources.RESTAPI1.getUser(args.id),
  dataSources.RESTAPI2.getUser(args.id),
]);
We normally let GraphQL's default resolver logic do the heavy lifting, but if you're finding that you need to "cherry pick" the data from both calls, you can return something like this in your root resolver:
return { dataSource1, dataSource2 }
and then write resolvers for each field:
const resolvers = {
  User: {
    someField: ({ dataSource1, dataSource2 }) => {
      return dataSource1.a || dataSource2.b;
    },
    someOtherField: ({ dataSource1, dataSource2 }) => {
      return someCondition ? dataSource1.foo : dataSource2.bar;
    },
  },
};
Assuming your user resolver returns a User type, for the sake of the example...
type User {
  id: ID!
  datasource1: RandomType
  datasource2: RandomType
}
You can create individual resolvers for each field in type User; this reduces the complexity of the user query to only the requested fields.
query {
  user {
    id
    datasource1 {
      ...
    }
  }
}
const resolvers = {
  Query: {
    user: () => {
      return { id: "..." };
    },
  },
  User: {
    datasource1: () => { ... },
    datasource2: () => { ... }, // won't execute, since the query doesn't request it
  },
};
The datasource1 and datasource2 resolvers execute in parallel, but only after Query.user has resolved.
For parallel calls:
const users = async (_source, args, { dataSources }) => {
  try {
    const promises = [
      dataSources.RESTAPI1,
      dataSources.RESTAPI2,
    ].map((api) => api.getUser(args.id)); // call through the data source so `this` stays bound
    const data = await Promise.all(promises);
    return Object.assign({}, ...data);
  } catch (error) {
    console.log("An error occurred.", error);
  }
  return [];
};

How to fix problems with Promise arguments types in Node.js application?

In a Node.js application I make two SQL queries with the Sequelize library, then combine the results of both requests in a Promise.
router.post('/notification', function(request, response) {
  const survey_id = request.query.survey_id;

  const employees = sequelize.query('SELECT ARRAY_AGG (EMPLOYEE) AS EMPLOYEES FROM SURVEYS_EMPLOYEES_RELATIONSHIP WHERE SURVEY_ID = :survey_id AND STATUS = FALSE', {
    replacements: {
      survey_id: survey_id,
    },
    type: sequelize.QueryTypes.SELECT,
  });

  const templates = sequelize.query('SELECT TEMPLATE FROM TEMPLATES WHERE ID = 1', {
    type: sequelize.QueryTypes.SELECT,
  });

  Promise.all([employees, templates]).then(responses => {
    console.log(responses[0][0]["employees"]);
    console.log(responses[1][0]["template"]);
  }).catch(error => {
    console.log(error);
  });
});
I get the following warning when I run the application:
Argument type [Promise, Promise] is not assignable to parameter type [(PromiseLike<T>|T), ...]
How can I fix this warning? It seems like something is wrong with this part of the code:
.all([employees, templates]).
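The message reads like an editor/type-inspection warning rather than a runtime error: older Sequelize versions return Bluebird promises from sequelize.query, which some IDE type stubs don't treat as PromiseLike, so Promise.all's signature appears not to match. As a sketch (an assumption, not a confirmed fix), wrapping the queries in native promises usually satisfies that typing, and the code keeps working the same way:

// wrap the Sequelize (Bluebird) promises in native promises so Promise.all's
// typing accepts them; runtime behaviour is unchanged
const employees = Promise.resolve(sequelize.query(
  'SELECT ARRAY_AGG (EMPLOYEE) AS EMPLOYEES FROM SURVEYS_EMPLOYEES_RELATIONSHIP WHERE SURVEY_ID = :survey_id AND STATUS = FALSE',
  {
    replacements: { survey_id: survey_id },
    type: sequelize.QueryTypes.SELECT,
  }
));
const templates = Promise.resolve(sequelize.query(
  'SELECT TEMPLATE FROM TEMPLATES WHERE ID = 1',
  { type: sequelize.QueryTypes.SELECT }
));

Promise.all([employees, templates])
  .then(([employeeRows, templateRows]) => {
    console.log(employeeRows[0]["employees"]);
    console.log(templateRows[0]["template"]);
  })
  .catch((error) => console.log(error));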

GraphQL error on subfields with graphql-yoga

I'm trying to query a GraphQL API via a proxy through another GraphQL API and am receiving an error. I'm using graphql-yoga for my server and connecting to another GraphQL API from a CMS. Here is my code:
server.js
const { GraphQLServer } = require('graphql-yoga');
const Prismic = require('./prismic.config.js');
const gql = require('graphql-tag');

const typeDefs = `
  type Query {
    page(lang: String, uid: String): Page
  }

  type Page {
    page_title: [TextField]
  }

  type TextField {
    text: String
  }
`;

const resolvers = {
  Query: {
    page: (parent, args, context, info) => {
      const query = gql`${context.request.body.query}`;
      const result = context.Prismic.query({
        query,
        variables: { ...args },
      })
        .then(resp => {
          return resp.data.page;
        })
        .catch(err => console.log(err));
      return result;
    },
  },
};

const server = new GraphQLServer({
  typeDefs,
  resolvers,
  context: req => ({ ...req, Prismic }),
});

server.start(() => console.log('Server is running on localhost:4000'));
Here is my query, run from the GraphQL Playground that comes with graphql-yoga:
query {
  page(lang: "en-gb", uid: "homepage") {
    page_title {
      text
    }
  }
}
The error I'm receiving is:
'Query does not pass validation. Violations:\n\nField \'page_title\'
of type \'Json\' must not have a sub selection. (line 3, column 5):\n
page_title {\n ^' } },
The strange thing is that I can get a valid response if, as the error suggests, I hardcode the query on the server without the nested text field, like so:
// const query = gql`${context.request.body.query}`;
const query = gql`
  query($uid: String!) {
    page(lang: "en-gb", uid: $uid) {
      page_title
    }
  }
`;
Attempting to modify my query in GraphQL Playground to not include the nested text field, like so:
query {
  page(lang: "en-gb", uid: "homepage") {
    page_title
  }
}
Gives me the following error and does not allow me to make the request at all:
field "page_title" of type "[TextField]" must have a selection of
subfields. Did you mean "page_title { ... }"?
The error suggests that I need to add the nested subfield text, which is what I intend, but when I use this query instead of the hardcoded one on the server it gives me the error mentioned before.
Not sure if I've gone wrong somewhere in my setup?
Thanks
In your GraphQL schema, page_title: [TextField] is not one of the scalar types.
As a result, when making a query you need to define exactly which fields you want to fetch, and the fields in the query should be expanded down to the level where only scalar types remain, so GraphQL knows how to resolve your query.
So this is the only form of the query that should work from the client side (the GraphQL Playground that comes with graphql-yoga):
query {
  page(lang: "en-gb", uid: "homepage") {
    page_title {
      text
    }
  }
}
But the error from the server comes from your approach of making a GraphQL query inside a GraphQL resolver:
const result = context.Prismic.query({
  query,
  variables: { ...args }
})
So I'm 100% sure that page_title in Prismic has the custom scalar type Json. As a result, you can't use the same query for this request.
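Building on that (a sketch, not part of the original answer): instead of forwarding the incoming query verbatim, the server's resolver could send Prismic its own query with page_title as a plain field and then reshape the returned JSON into the local [TextField] shape. The block shape assumed here ({ text: ... } entries in an array) is an assumption about Prismic's structured-text JSON, not something confirmed by the thread.

// query Prismic without the sub-selection, then map the Json value onto [TextField]
const PAGE_QUERY = gql`
  query($lang: String, $uid: String!) {
    page(lang: $lang, uid: $uid) {
      page_title
    }
  }
`;

const resolvers = {
  Query: {
    page: async (parent, args, context) => {
      const resp = await context.Prismic.query({
        query: PAGE_QUERY,
        variables: { ...args },
      });
      const page = resp && resp.data && resp.data.page;
      if (!page) return null;
      return {
        // page_title arrives as a Json scalar; reshape it for the local schema
        page_title: (page.page_title || []).map((block) => ({ text: block.text })),
      };
    },
  },
};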

Using dataloader for resolvers with nested data from ArangoDB

I'm implementing a GraphQL API over ArangoDB (with arangojs) and I want to know how best to implement dataloader (or similar) for this very basic use case.
I have two resolvers with DB queries shown below (both of them work). The first fetches Persons, the second fetches a list of Record objects associated with a given Person (one-to-many). The association is made using ArangoDB's edge collections.
import { Database, aql } from 'arangojs'
import pick from 'lodash/pick'

const db = new Database('http://127.0.0.1:8529')
db.useBasicAuth('root', '')
db.useDatabase('_system')

// id is the auto-generated userId, which is `_key` in Arango
const fetchPerson = id => async (resolve, reject) => {
  try {
    const cursor = await db.query(aql`RETURN DOCUMENT("PersonTable", ${String(id)})`)
    // Unwrap the results from the cursor object
    const result = await cursor.next()
    return resolve(pick(result, ['_key', 'firstName', 'lastName']))
  } catch (err) {
    return reject(err)
  }
}

// id is the auto-generated userId (`_key` in Arango) who is associated with the
// records via the Person_HasMany_Records edge collection
const fetchRecords = id => async (resolve, reject) => {
  try {
    const edgeCollection = await db.collection('Person_HasMany_Records')
    // Query simply says: `get all connected nodes 1 step outward from origin node, in edgeCollection`
    const cursor = await db.query(aql`
      FOR record IN 1..1
        OUTBOUND DOCUMENT("PersonTable", ${String(id)})
        ${edgeCollection}
        RETURN record`)
    return resolve(
      cursor.map(each => pick(each, ['_key', 'intro', 'title', 'misc']))
    )
  } catch (err) {
    return reject(err)
  }
}

export default {
  Query: {
    getPerson: (_, { id }) => new Promise(fetchPerson(id)),
    getRecords: (_, { ownerId }) => new Promise(fetchRecords(ownerId)),
  },
}
Now, if I want to fetch the Person data with the Records as nested data, in a single request, the query would be this:
aql`
  LET person = DOCUMENT("PersonTable", ${String(id)})
  LET records = (
    FOR record IN 1..1
      OUTBOUND person
      ${edgeCollection}
      RETURN record
  )
  RETURN MERGE(person, { records: records })`
So how should I update my API to employ batch requests / caching? Can I somehow run fetchRecords(id) inside of fetchPerson(id), but only when fetchPerson(id) is invoked with the records property included?
Here is the setup file; notice I'm using graphql-tools, because I took this from a tutorial somewhere.
import http from 'http'
import db from './database'
import schema from './schema'
import resolvers from './resolvers'
import express from 'express'
import bodyParser from 'body-parser'
import { graphqlExpress, graphiqlExpress } from 'apollo-server-express'
import { makeExecutableSchema } from 'graphql-tools'

const app = express()

// bodyParser is needed just for POST.
app.use('/graphql', bodyParser.json(), graphqlExpress({
  schema: makeExecutableSchema({ typeDefs: schema, resolvers })
}))
app.get('/graphiql', graphiqlExpress({ endpointURL: '/graphql' })) // if you want GraphiQL enabled

app.listen(3000)
And here's the schema.
export default `
  type Person {
    _key: String!
    firstName: String!
    lastName: String!
  }

  type Records {
    _key: String!
    intro: String!
    title: String!
    misc: String!
  }

  type Query {
    getPerson(id: Int!): Person
    getRecords(ownerId: Int!): [Records]!
  }

  type Schema {
    query: Query
  }
`
So, the real benefit of dataloader is that it stops you from doing N+1 queries. Meaning, for example, if in your schema Person had a field records and you asked for the first 10 people's records, a naive GQL schema would cause 11 requests to be fired: 1 for the first 10 people, and then one for each of their records.
With dataloader implemented, you cut that down to two requests: one for the first 10 people, and then one for all of the records of the first ten people.
With your schema above, it doesn't seem that you can benefit in any way from dataloader, since there's no possibility of n+1 queries. The only benefit you might get is caching if you make multiple requests for the same person or records within a single request (which again, isn't possible based on your schema design unless you are using batched queries).
Let's say you want the caching though. Then you could do something like this:
// loaders.js
import DataLoader from 'dataloader';

// The callback functions take a list of keys and return a list of values to
// hydrate those keys, in order, with `null` for any value that cannot be hydrated
export default {
  personLoader: new DataLoader(loadBatchedPersons),
  personRecordsLoader: new DataLoader(loadBatchedPersonRecords),
};
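The batch callbacks themselves aren't shown above; a sketch of loadBatchedPersons against the ArangoDB setup from the question might look like the following (the db import and the exact AQL are assumptions about how that module is shared, not code from the answer):

// DataLoader hands the batch function an array of keys and expects an array of
// values back in the same order, with null for any key it can't find
import { aql } from 'arangojs'
import { db } from './database' // assumed export of the arangojs Database instance

const loadBatchedPersons = async (ids) => {
  // DOCUMENT() accepts an array of keys and returns the documents that exist
  const cursor = await db.query(aql`RETURN DOCUMENT("PersonTable", ${ids.map(String)})`)
  const persons = await cursor.next()
  // line the results back up with the requested ids, null for misses
  return ids.map((id) => persons.find((p) => p && p._key === String(id)) || null)
}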
You then want to attach the loaders to your context for easy sharing. Modified example from Apollo docs:
// app.js
import loaders from './loaders';

app.use(
  '/graphql',
  bodyParser.json(),
  graphqlExpress(req => {
    return {
      schema: myGraphQLSchema,
      context: {
        loaders,
      },
    };
  }),
);
Then, you can use them from the context in your resolvers:
// ViewerType.js:
// Some parent type, such as `viewer` often
{
  person: {
    type: PersonType,
    // call .load() with whatever key identifies the person (args.id is just an example)
    resolve: async (viewer, args, context, info) => context.loaders.personLoader.load(args.id),
  },
  records: {
    type: new GraphQLList(RecordType), // This could also be a connection
    resolve: async (viewer, args, context, info) => context.loaders.personRecordsLoader.load(args.ownerId),
  },
}
I guess I was confused about the capability of dataloader. Serving nested data was really the stumbling block for me.
This is the missing code: the export from resolvers.js needed a Person property,
export default {
  Person: {
    records: (person) => new Promise(fetchRecords(person._key)),
  },
  Query: {
    getPerson: (_, { id }) => new Promise(fetchPerson(id)),
    getRecords: (_, { ownerId }) => new Promise(fetchRecords(ownerId)),
  },
}
And the Person type in the schema needed a records property.
type Person {
  _key: String!
  firstName: String!
  lastName: String!
  records: [Records]!
}
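With the records field and the Person.records resolver in place, a nested query along these lines (just an illustration) resolves the records through that field resolver instead of a separate getRecords call:

query {
  getPerson(id: 1) {
    firstName
    lastName
    records {
      title
      intro
    }
  }
}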
Seems these features are provided by Apollo graphql-tools.
