Apollo GraphQL Nested Mutation - javascript

need some help on nested mutations.
The abstracted scenario is this:
I want to combine 2 mutation calls on apollo-server: first create a Customer, then create an Address for that customer. The Address mutation needs the new customer's ID, but it also needs the address input from the original top-level mutation.
Here's the generic code:
makeExecutableSchema({
  typeDefs: gql`
    type Mutation {
      createCustomerWithAddress(customer: CustomerRequest!, address: AddressRequest!): Response
    }
    input CustomerRequest {
      name: String!
    }
    input AddressRequest {
      address: String!
      city: String!
      state: String!
      country: String!
    }
    type Response {
      customerID: Int!
      addressID: Int!
    }
  `,
  resolvers: {
    Mutation: {
      createCustomerWithAddress: async (_, { customer }, context, info) => {
        return await api.someAsyncCall(customer);
      }
    },
    Response: {
      addressID: async ({ customerID }) => {
        // how do we get AddressRequest here?
        return await api.someAsyncCall(customerID, address);
      }
    }
  }
})
There's a lot of complexity I'm not showing from the original code, but the root of my question is how to access the request params in sub-mutations, if that's even possible. I'd rather not pass address down from the top mutation to the sub-mutation.

You don't need a Response entry in resolvers: createCustomerWithAddress should simply return an object shaped like the Response type.
resolvers: {
  Mutation: {
    createCustomerWithAddress: async (_, { customer, address }, context, info) => {
      // create the customer first
      const customerID = await api.CreateCustomer(customer);
      // create the address and assign it the new customerID
      const addressID = await api.CreateAddress({ ...address, customerID });
      // return an object shaped like the Response type
      return { customerID, addressID };
    }
  }
}
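If you do want the address created lazily in a Response.addressID field resolver (as in the original sketch), note that a field resolver only sees what its parent resolver returned, so you can thread the address input through the parent object. A minimal sketch, using mock stand-ins for the hypothetical api helpers:

```javascript
// Mock API helpers (hypothetical stand-ins for the real service calls)
const api = {
  CreateCustomer: async (customer) => 1,  // pretend the new customer gets ID 1
  CreateAddress: async (address) => 42,   // pretend the new address gets ID 42
};

const resolvers = {
  Mutation: {
    // Return the address input as part of the parent object so the
    // Response.addressID field resolver can see it.
    createCustomerWithAddress: async (_, { customer, address }) => {
      const customerID = await api.CreateCustomer(customer);
      return { customerID, address };
    },
  },
  Response: {
    // The first argument of a field resolver is whatever the parent
    // resolver returned, so `address` is available here.
    addressID: async ({ customerID, address }) =>
      api.CreateAddress({ ...address, customerID }),
  },
};
```

Extra properties on the parent object (like `address`) are never visible to clients unless they are declared in the schema, so this is safe.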

Related

Get populated data from Mongoose to the client

On the server I am populating user data, and when I print it to the console everything works fine, but I am not able to access the data on the client or even in the GraphQL Playground.
This is my Schema
// Post.js
const { model, Schema } = require("mongoose");

const postSchema = new Schema({
  body: String,
  user: {
    type: Schema.Types.ObjectId,
    ref: "User",
  },
});
module.exports = model("Post", postSchema);

// User.js
const userSchema = new Schema({
  username: String,
});
module.exports = model("User", userSchema);
const { gql } = require("apollo-server");
module.exports = gql`
type Post {
id: ID!
body: String!
user: [User]!
}
type User {
id: ID!
username: String!
}
type Query {
getPosts: [Post]!
getPost(postId: ID!): Post!
}
`;
Query: {
  async getPosts() {
    try {
      const posts = await Post.find().populate("user");
      console.log("posts: ", posts[0]);
      // This works and logs the populated user with the username
      return posts;
    } catch (err) {
      throw new Error(err);
    }
  },
}
But on the client or even in Playground, I can't access the populated data.
query getPosts {
  getPosts {
    body
    user {
      username
    }
  }
}
My question is how to access the data from the client.
Thanks for your help.
You are using this feature the wrong way: define an object in your resolvers named after your type, and give it a method that returns the related user from the parent value. The Apollo Server docs have a full guide on how to use this feature.
Use lean() like this:
const posts = await Post.find().populate("user").lean();
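The type-level resolver the first answer describes would look roughly like this; `User` here is a mock stand-in for the Mongoose model shown above, so swap in the real import:

```javascript
// Stand-in for the Mongoose User model; a real resolver would use
// require("./models/User") and User.findById would hit the database.
const User = {
  findById: async (id) => ({ id, username: 'jane' }), // mock lookup
};

const resolvers = {
  Post: {
    // The parent is the post document returned by the Query resolver;
    // `parent.user` holds the referenced user's ObjectId.
    user: (parent) => User.findById(parent.user),
  },
};
```

Note also that the schema declares `user: [User]!` (a list) while the Mongoose schema stores a single ObjectId; returning a single user for a list field will not resolve correctly, so the schema field should probably be `user: User!`.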

Does Apollo cache the returned data from a mutation

I'm using Apollo Client in a React app. I need to run a mutation and keep the returned data for later use (at which point I won't have access to the variables). Do I have to use another state management solution, or can this be done in Apollo?
I've read about doing this with queries, but not with mutations.
Here's my code so far
// Mutation
const [myMutation, { data, errors, loading }] = useMutation(MY_MUTATION, {
  onCompleted({ myMutation }) {
    console.log('myMutation: ', myMutation.dataToKeep);
    if (myMutation && myMutation.dataToKeep)
      SetResponse(myMutation.dataToKeep);
  },
  onError(error) {
    console.error('error: ', error);
  },
});
// How I call it
onClick={() => {
  myMutation({
    variables: {
      input: {
        phoneNumber: '0000000000',
        id: '0000',
      },
    },
  });
}}
Edit: here is the mutation
export const MY_MUTATION = gql`
  mutation MyMutation($input: MyMutationInput!) {
    myMutation(input: $input) {
      dataToKeep
      expiresAt
    }
  }
`;
and the schema for this mutation
MyMutationInput:
  phoneNumber: String!
  id: String!
MyMutationPayload:
  dataToKeep
  expiresAt
Case 1: Payload is using common entities
Simply put, the Apollo client's cache keeps everything that's received from queries and mutations, though the schema needs to include id: ID! fields and any query needs to use both the id and __typename fields on relevant nodes for the client to know which part of the cache to update.
This assumes that the mutation payload is common data from the schema that can be retrieved through a normal query. This is the best case scenario.
Given the following schema on the server:
type User {
  id: ID!
  phoneNumber: String!
}
type Query {
  user(id: String!): User!
}
type UpdateUserPayload {
  user: User!
}
type Mutation {
  updateUser(id: String!, phoneNumber: String!): UpdateUserPayload!
}
And assuming a cache is used on the client:
import { InMemoryCache, ApolloClient } from '@apollo/client';

const client = new ApolloClient({
  // ...other arguments...
  cache: new InMemoryCache(options)
});
The cache generates a unique ID for every identifiable object included in the response.
The cache stores the objects by ID in a flat lookup table.
Whenever an incoming object is stored with the same ID as an existing object, the fields of those objects are merged.
If the incoming object and the existing object share any fields, the incoming object overwrites the cached values for those fields.
Fields that appear in only the existing object or only the incoming object are preserved.
Normalization constructs a partial copy of your data graph on your client, in a format that's optimized for reading and updating the graph as your application changes state.
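The merge rule described above amounts, in the default case, to a shallow merge where incoming fields win. This is a simplification (the real InMemoryCache merges field by field and supports custom merge functions), but as a plain-object sketch:

```javascript
// Sketch of the cache's default merge rule for two objects stored under
// the same ID: shared fields are overwritten by the incoming object,
// fields present in only one of the two objects are preserved.
function mergeCachedObject(existing, incoming) {
  return { ...existing, ...incoming };
}
```

For example, merging cached `{ id: '1', name: 'A', age: 3 }` with incoming `{ id: '1', age: 4 }` keeps `name` and updates `age`.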
The client's mutation should be
mutation UpdateUserPhone($phoneNumber: String!, $id: String!) {
  updateUser(id: $id, phoneNumber: $phoneNumber) {
    user {
      __typename  # Added by default by the Apollo client
      id          # Required to identify the user in the cache
      phoneNumber # Field that'll be updated in the cache
    }
  }
}
Then, any component using this user through the same Apollo client in the app will be up to date automatically. There's nothing special to do, the client will use the cache by default and trigger renders whenever the data changes.
import { gql, useQuery } from '@apollo/client';

const USER_QUERY = gql`
  query GetUser($id: String!) {
    user(id: $id) {
      __typename
      id
      phoneNumber
    }
  }
`;

const UserComponent = ({ userId }) => {
  const { loading, error, data } = useQuery(USER_QUERY, {
    variables: { id: userId },
  });
  if (loading) return null;
  if (error) return `Error! ${error}`;
  return <div>{data.user.phoneNumber}</div>;
}
The fetchPolicy option defaults to cache-first.
Case 2: Payload is custom data specific to the mutation
If the data is in fact not available elsewhere in the schema, it won't be possible to use the Apollo cache automatically as explained above.
Use another state management solution
A couple options:
local storage
React's context API
etc.
Here's an example from the Apollo GraphQL documentation using localStorage:
const [login, { loading, error }] = useMutation(LOGIN_USER, {
  onCompleted({ login }) {
    localStorage.setItem('token', login.token);
    localStorage.setItem('userId', login.id);
  }
});
Define a client-side schema
This is a pure Apollo GraphQL solution since the client is also a state management library, which enables useful developer tooling and helps reason about the data.
Create a local schema.
// schema.js
export const typeDefs = gql`
  type DataToKeep {
    # anything here
  }
  extend type Query {
    dataToKeep: DataToKeep # probably nullable?
  }
`;
Initialize a custom cache
// cache.js
export const dataToKeepVar = makeVar(null);

export const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        dataToKeep: {
          read() {
            return dataToKeepVar();
          }
        },
      }
    }
  }
});
Apply the schema override at the client's initialization
import { ApolloClient } from '@apollo/client';
import { cache } from './cache';
import { typeDefs } from './schema';

const client = new ApolloClient({
  cache,
  typeDefs,
  // other options like headers, uri, etc.
});
Keep track of the changes in the mutation:
const [myMutation, { data, errors, loading }] = useMutation(MY_MUTATION, {
  onCompleted({ myMutation }) {
    if (myMutation && myMutation.dataToKeep)
      dataToKeepVar(myMutation.dataToKeep);
  }
});
Then, query the @client field.
import { gql, useQuery } from '@apollo/client';

const DATA_QUERY = gql`
  query dataToKeep {
    dataToKeep @client {
      # anything here
    }
  }
`;

const AnyComponent = ({ userId }) => {
  const { loading, error, data } = useQuery(DATA_QUERY);
  if (loading) return null;
  if (error) return `Error! ${error}`;
  return <div>{JSON.stringify(data.dataToKeep)}</div>;
}
See also the documentation on managing local state.

TypeORM Apollo nested query resolver

I have a schema (with the appropriate database tables and entity classes defined) like
type User {
  id: Int!
  phoneNumber: String!
}
type Event {
  id: Int!
  host: User
}
and I'm trying to use Apollo to write a query like
query {
  event(id: 1) {
    host {
      firstName
    }
  }
}
But I can't figure out how to get the Apollo library to resolve the User type in the host field from the hostId that is stored on the event object.
I modified the event to return the hostId field, and that works perfectly fine, but GraphQL won't resolve the id to the appropriate user type. What am I missing?
edit: missing resolver code
event: async (parent: any, args: { id: number }) => {
  const eventRepository = getConnection().getRepository(Event);
  const event = await eventRepository.findOne(args.id);
  return event;
},
I managed to get a working version by using findOne(args.id, { relations: ['host'] }), but I don't like that, because it seems like something that would be appropriate to delegate to GraphQL.
Your resolver should look like this:
const resolver = {
  Query: {
    event: async (_: any, args: any) => {
      return await event.findOne(args.id);
    }
  },
  // The key must match the schema type name (Event), and the field
  // resolver should look the user up by the hostId stored on the event.
  Event: {
    host: async (parent: any, args: any, context: any) => {
      return await user.findOne({ id: parent.hostId });
    }
  }
};

GraphQL relationship in Mongoose

I have the following GraphQL Schema
type User {
  id: String!
  name: String
  username: String!
}
type Conversation {
  id: String!
  participants: [User]
}
type Query {
  user(_id: String!): User
  conversation(_id: String!): Conversation
}
My resolver for the conversation is as follows:
conversation: async (parent, args) => {
  let conversation = await Conversation.findById(args._id);
  conversation.id = conversation._id.toString();
  return conversation;
}
The participants field holds an array of user ObjectIds. What do I need to do in my resolver to fetch the users' data within the conversation call?
For example, a call like this:
query test($id: String!) {
  conversation(_id: $id) {
    id
    participants {
      id
      username
    }
  }
}
You probably used references in your object model, so in order to fetch the participants' data you should use Mongoose populate.
This would work for you:
conversation: async (parent, args) => {
  let conversation = await Conversation.findById(args._id).populate('participants');
  conversation.id = conversation._id.toString();
  return conversation;
}

Using dataloader for resolvers with nested data from ArangoDB

I'm implementing a GraphQL API over ArangoDB (with arangojs), and I want to know how best to implement dataloader (or similar) for this very basic use case.
I have 2 resolvers with DB queries shown below (both of these work): the first fetches Persons, the second fetches a list of Record objects associated with a given Person (one-to-many). The association is made using ArangoDB's edge collections.
import { Database, aql } from 'arangojs'
import pick from 'lodash/pick'

const db = new Database('http://127.0.0.1:8529')
db.useBasicAuth('root', '')
db.useDatabase('_system')

// id is the auto-generated userId, which is `_key` in Arango
const fetchPerson = id => async (resolve, reject) => {
  try {
    const cursor = await db.query(aql`RETURN DOCUMENT("PersonTable", ${String(id)})`)
    // Unwrap the result from the cursor object
    const result = await cursor.next()
    return resolve(pick(result, ['_key', 'firstName', 'lastName']))
  } catch (err) {
    return reject(err)
  }
}
// id is the auto-generated userId (`_key` in Arango) who is associated
// with the records via the Person_HasMany_Records edge collection
const fetchRecords = id => async (resolve, reject) => {
  try {
    const edgeCollection = await db.collection('Person_HasMany_Records')
    // Query simply says: get all connected nodes 1 step outward from the
    // origin node, in edgeCollection
    const cursor = await db.query(aql`
      FOR record IN 1..1
        OUTBOUND DOCUMENT("PersonTable", ${String(id)})
        ${edgeCollection}
        RETURN record`)
    return resolve(cursor.map(each =>
      pick(each, ['_key', 'intro', 'title', 'misc'])
    ))
  } catch (err) {
    return reject(err)
  }
}
export default {
  Query: {
    getPerson: (_, { id }) => new Promise(fetchPerson(id)),
    getRecords: (_, { ownerId }) => new Promise(fetchRecords(ownerId)),
  }
}
Now, if I want to fetch the Person data with the Records as nested data, in a single request, the query would be this:
aql`
  LET person = DOCUMENT("PersonTable", ${String(id)})
  LET records = (
    FOR record IN 1..1
      OUTBOUND person
      ${edgeCollection}
      RETURN record
  )
  RETURN MERGE(person, { records: records })`
So how should I update my API to employ batch requests / caching? Can I somehow run fetchRecords(id) inside of fetchPerson(id) but only when fetchPerson(id) is invoked with the records property included?
Here's the setup file; notice I'm using graphql-tools, because I took this from a tutorial somewhere.
import http from 'http'
import db from './database'
import schema from './schema'
import resolvers from './resolvers'
import express from 'express'
import bodyParser from 'body-parser'
import { graphqlExpress, graphiqlExpress } from 'apollo-server-express'
import { makeExecutableSchema } from 'graphql-tools'

const app = express()

// bodyParser is needed just for POST.
app.use('/graphql', bodyParser.json(), graphqlExpress({
  schema: makeExecutableSchema({ typeDefs: schema, resolvers })
}))
app.get('/graphiql', graphiqlExpress({ endpointURL: '/graphql' })) // if you want GraphiQL enabled
app.listen(3000)
And here's the schema.
export default `
  type Person {
    _key: String!
    firstName: String!
    lastName: String!
  }
  type Records {
    _key: String!
    intro: String!
    title: String!
    misc: String!
  }
  type Query {
    getPerson(id: Int!): Person
    getRecords(ownerId: Int!): [Records]!
  }
  type Schema {
    query: Query
  }
`
The real benefit of dataloader is that it stops you from doing n+1 queries. For example, if Person had a records field in your schema and you asked for the records of the first 10 people, a naive GraphQL schema would fire 11 requests: 1 for the first 10 people, then one for each person's records.
With dataloader implemented, you cut that down to two requests: one for the first 10 people, and one for all of the records of those ten people.
With your schema above, it doesn't seem that you can benefit in any way from dataloader, since there's no possibility of n+1 queries. The only benefit you might get is caching if you make multiple requests for the same person or records within a single request (which again isn't possible based on your schema design, unless you are using batched queries).
Let's say you want the caching though. Then you could do something like this:
// loaders.js
// The batch functions take a list of keys and return a list of values to
// hydrate those keys, in order, with `null` for any value that cannot be hydrated
export default {
  personLoader: new DataLoader(loadBatchedPersons),
  personRecordsLoader: new DataLoader(loadBatchedPersonRecords),
};
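The batch functions themselves (loadBatchedPersons, loadBatchedPersonRecords) aren't shown above; their contract is the one the comment describes: take a list of keys and return a list of values in the same order, with null for any key that can't be hydrated. A sketch with an in-memory Map standing in for the ArangoDB query (a real implementation would run one AQL query for all keys at once):

```javascript
// Hypothetical in-memory stand-in for the PersonTable collection.
const personTable = new Map([
  ['1', { _key: '1', firstName: 'Ada', lastName: 'Lovelace' }],
  ['2', { _key: '2', firstName: 'Alan', lastName: 'Turing' }],
]);

// DataLoader batch function: called once per tick with all requested keys.
// Must return values in the same order as `keys`, with null for misses.
async function loadBatchedPersons(keys) {
  return keys.map((key) => personTable.get(String(key)) || null);
}
```

DataLoader collects every `.load(key)` call made during a single tick of the event loop and hands the de-duplicated keys to this function in one batch, which is what collapses the n+1 queries into one.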
You then want to attach the loaders to your context for easy sharing. Modified example from Apollo docs:
// app.js
import loaders from './loaders';

app.use(
  '/graphql',
  bodyParser.json(),
  graphqlExpress(req => {
    return {
      schema: myGraphQLSchema,
      context: {
        loaders,
      },
    };
  }),
);
Then, you can use them from the context in your resolvers:
// ViewerType.js:
// Some parent type, such as `viewer`, often
{
  person: {
    type: PersonType,
    // Call .load(key) so that requests in the same tick are batched
    resolve: async (viewer, args, context, info) =>
      context.loaders.personLoader.load(args.id),
  },
  records: {
    type: new GraphQLList(RecordType), // This could also be a connection
    resolve: async (viewer, args, context, info) =>
      context.loaders.personRecordsLoader.load(args.ownerId),
  },
}
I guess I was confused about the capability of dataloader; serving nested data was really the stumbling block for me.
This is the missing code. The export from resolvers.js needed a Person property:
export default {
  Person: {
    records: (person) => new Promise(fetchRecords(person._key)),
  },
  Query: {
    getPerson: (_, { id }) => new Promise(fetchPerson(id)),
    getRecords: (_, { ownerId }) => new Promise(fetchRecords(ownerId)),
  },
}
And the Person type in the schema needed a records property.
type Person {
  _key: String!
  firstName: String!
  lastName: String!
  records: [Records]!
}
It seems these features are provided by Apollo's graphql-tools.
