Support for `{where: 'raw query'}` has been removed

I'm running a GraphQL server using the serverless framework on AWS Lambda.
I'm fetching the data in the UI using apollo-link-batch-http.
If I run it locally using serverless-offline, it works fine. But if I run it on AWS Lambda, it successfully resolves the fooResolver but not the barResolver as it throws the above error message.
The Model.cached(300) is a tiny cache wrapper I made. You can see it here:
https://gist.github.com/lookapanda/4676083186849bb6c5ae6f6230ad7d8f
It basically just lets me use my own findById function and so on.
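The gist itself isn't reproduced here, but a minimal sketch of a TTL cache wrapper along those lines (all names hypothetical, not the actual gist code; note the real wrapper evidently returns an object with a value property, while this sketch returns the value directly) could look like:

```javascript
// Minimal sketch of a TTL cache wrapper: memoizes method calls by
// method name + serialized args, evicting entries after ttlSeconds.
// Illustrative only, not the actual gist implementation.
const cacheStore = new Map();

function cached(target, ttlSeconds) {
  return new Proxy(target, {
    get(obj, prop) {
      const original = obj[prop];
      if (typeof original !== 'function') return original;
      return async (...args) => {
        const key = `${String(prop)}:${JSON.stringify(args)}`;
        const hit = cacheStore.get(key);
        if (hit && hit.expires > Date.now()) return hit.value;
        const value = await original.apply(obj, args);
        cacheStore.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
        return value;
      };
    },
  });
}
```

A call like cached(Model, 300).findAll(...) would then serve repeated identical queries from the cache for 300 seconds.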
The weird thing is, this error only appears if I use apollo-link-batch-http, but not if I use apollo-link-http. So when the requests are batched into a single GraphQL request, there are no such errors (although then I get this error: https://github.com/sequelize/sequelize/issues/9242)
I really don't know what is going on there; there is no raw where query in any of those resolvers. And it gets even weirder: it only happens with the cached result. The first request is totally valid and successful, but every consecutive request fails with the above error message.
I really hope someone can help me, this is driving me insane :D
export const fooResolver = async () => {
  const Model = db.getDB().sequelize.models.fooModel;
  const data = await Model.cached(300).findAll({
    where: {
      time: {
        [Op.gt]: Model.sequelize.literal('CURRENT_TIMESTAMP()'),
      },
      enabled: true,
      state: 'PLANNED',
    },
    order: [['time', 'DESC']],
    limit: 5,
  });
  return data.value;
};
export const barResolver = async (obj, args) => {
  const models = db.getDB().sequelize.models;
  const Model = models.fooModel;
  const data = await Model.findById(args.id, {
    include: [
      {
        model: models.barModel,
        include: [
          {
            association: 'fooAssociation',
            include: [{ association: 'barAssociation' }],
            order: ['showOrder', 'ASC'],
          },
        ],
      },
    ],
  });
  return {
    data,
  };
};

I faced similar situation, except in my case using the code below works well:
.findAll({
  where: {
    title: req.params.title
  }
})

Okay, so after some tedious debugging I found out that in the cacheable wrapper I was using this snippet:
https://github.com/sequelize/sequelize/issues/2325#issuecomment-366060303
I still don't really know why this error only showed up on Lambda and not locally, but it stopped erroring once I used only the selectQuery() method and returned that directly, instead of the whole Model.addHook approach. So I basically changed this
export const getSqlFromSelect = (Model, method, args) => {
  if (!SUPPORTED_SELECT_METHODS.includes(method)) {
    throw new Error('Unsupported method.');
  }
  const id = generateRandomHash(10);
  return new Promise((resolve, reject) => {
    Model.addHook('beforeFindAfterOptions', id, (options) => {
      Model.removeHook('beforeFindAfterOptions', id);
      resolve(
        Model.sequelize.dialect.QueryGenerator.selectQuery(
          Model.getTableName(),
          options,
          Model
        ).slice(0, -1)
      );
    });
    Model[method](...args).catch(reject);
  });
};
to this
export const getSqlFromSelect = (Model, identifier, options) => {
  if (typeof identifier === 'number' || typeof identifier === 'string' || Buffer.isBuffer(identifier)) {
    options.where = {
      [Model.primaryKeyAttribute]: identifier,
    };
  }
  return Model.sequelize.dialect.QueryGenerator.selectQuery(
    Model.getTableName(),
    options,
    Model
  ).slice(0, -1);
};
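For illustration, here is how the rewritten helper might be exercised. The stub model below is hypothetical and only mimics the small slice of the Sequelize surface the helper touches; the real QueryGenerator builds proper SQL rather than this toy string:

```javascript
// Hypothetical stub standing in for a real Sequelize model, so the shape
// of the getSqlFromSelect call is visible without a database.
const stubModel = {
  primaryKeyAttribute: 'id',
  getTableName: () => 'foo',
  sequelize: {
    dialect: {
      QueryGenerator: {
        // Toy stand-in: real Sequelize builds full SQL here.
        selectQuery: (table, options) =>
          `SELECT * FROM ${table} WHERE ${JSON.stringify(options.where)};`,
      },
    },
  },
};

const getSqlFromSelect = (Model, identifier, options) => {
  if (typeof identifier === 'number' || typeof identifier === 'string' || Buffer.isBuffer(identifier)) {
    options.where = { [Model.primaryKeyAttribute]: identifier };
  }
  return Model.sequelize.dialect.QueryGenerator.selectQuery(
    Model.getTableName(),
    options,
    Model
  ).slice(0, -1); // drop the trailing semicolon, as in the original
};
```

Calling getSqlFromSelect(stubModel, 42, {}) turns the bare identifier into a where clause on the primary key before generating the SQL.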

Related

How can I connect twice to db in Cypress tests?

I need to connect to my db (Postgres) twice during an autotest: at the beginning to truncate a table, and at the end to select a new note from the table.
I tried to do it using pg-promise, but the second connection returns null or undefined (the error "Target cannot be null or undefined." shows up after asserting the expected and actual results).
If I skip the first connection (for truncating), the second one runs normally and returns the new note from the table.
Also, if I run the Truncate after the Select, it gives two different results, depending on whether the table is empty or not.
If it is empty, the result is the same error. But if there is some record initially, everything becomes OK and the test finishes without error.
This is how I connect to and disconnect from the db:
const pgp = require('pg-promise')();
const postgresConfig = require(require('path').resolve('cypress.json'));

function dbConnection(query, userDefineConnection) {
  const db = pgp(userDefineConnection || postgresConfig.db);
  return db.any(query).finally(db.$pool.end)
  // return db.any(query).finally(pgp.end)
}

/** @type {Cypress.PluginConfig} */
module.exports = (on, config) => {
  on("task", {
    dbQuery: (query) => dbConnection(query.query, query.connection)
  });
}
And this is how I make requests to DB in test:
describe('example to-do app', () => {
  beforeEach(() => {
    cy.task('dbQuery', { query: 'TRUNCATE TABLE auto_db.public.requests RESTART IDENTITY CASCADE' })
  })

  it('tablist displays class', () => {
    cy.contains('button > span', 'Save').click()
    cy.task('dbQuery', { query: 'SELECT * FROM auto_db.public.requests' })
      .then(queryResponse => {
        expect(queryResponse[0]).to.deep.contain({
          id: 1,
          author_id: 4,
          type_id: 1,
        })
      })
  })
})
If you look at the implementation of cypress-postgres, it's similar, but they break up the response/return to fit the call to pgp.end() in between.
Since it sounds like the first connection isn't closing, I'd suspect the .finally() call isn't working.
const pgp = require('pg-promise')();
const postgresConfig = require(require('path').resolve('cypress.json'));

function dbConnection(query, userDefineConnection) {
  const db = pgp(userDefineConnection || postgresConfig.db);
  let response = db.any(query)
  pgp.end()
  return response
}

/** @type {Cypress.PluginConfig} */
module.exports = (on, config) => {
  on("task", {
    dbQuery: (query) => dbConnection(query.query, query.connection)
  });
}
You should be able to get rid of the postgresConfig line, since the config passed to the plugin function (on, config) is the same (in fact it's better, because you might want to override some config on the command line).
const pgp = require('pg-promise')();

/** @type {Cypress.PluginConfig} */
module.exports = (on, config) => {
  function dbConnection(query, userDefineConnection) {
    const db = pgp(userDefineConnection || config.db);
    let response = db.any(query)
    pgp.end()
    return response
  }

  on("task", {
    dbQuery: (query) => dbConnection(query.query, query.connection)
  });
}
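One way to make the cleanup explicit is an async try/finally, so the pool is closed only after the query settles. Sketched here against a generic db object with pg-promise's any/$pool.end shape; treat the exact pg-promise API as an assumption:

```javascript
// Sketch: run a query and guarantee the pool is closed whether the query
// succeeds or throws. `db` is anything exposing any(query) and $pool.end();
// with pg-promise this would be the database object.
async function runQueryAndClose(db, query) {
  try {
    return await db.any(query);
  } finally {
    await db.$pool.end(); // always runs, on success or failure
  }
}
```

The key difference from calling pgp.end() synchronously is that the pool is torn down only after the query promise has settled, so an in-flight query can't be cut off.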
The problem was that the new record in the database was not created right away, and Cypress was querying the db before the record existed.
Adding a wait before the db check helped.
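A generic alternative to a fixed wait is to poll until the row appears. This is a plain-JavaScript sketch of the mechanism (Cypress has its own built-in retry-ability, so this is just the idea):

```javascript
// Sketch: poll an async query until it returns at least one row, or give
// up after `timeoutMs`. Avoids a fixed sleep that may still be too short.
async function waitForRows(queryFn, { timeoutMs = 4000, intervalMs = 100 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const rows = await queryFn();
    if (rows.length > 0) return rows;
    if (Date.now() > deadline) throw new Error('Timed out waiting for rows');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

A task could call waitForRows(() => dbConnection('SELECT ...')) instead of selecting once and asserting immediately.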

Loop from multiple airtable bases in a Next JS page

I think this is more of a general async/await loop question, but I'm trying to do it within the bounds of an Airtable API request and within getStaticProps of Next.js, so I thought that was important to share.
What I want to do is create an array of base IDs like ["appBaseId01", "appBaseId02", "appBaseId03"] and output the contents of a page. I have it working with 1 base, but am failing at getting it for multiple.
Below is the code for one static base, if anyone can help me grok how I'd want to loop over these. My gut says that I need to await each uniquely and then pop them into an array, but I'm not sure.
const records = await airtable
  .base("appBaseId01")("Case Overview Information")
  .select()
  .firstPage();

const details = records.map((detail) => {
  return {
    city: detail.get("City") || null,
    name: detail.get("Name") || null,
    state: detail.get("State") || null,
  };
});

return {
  props: {
    details,
  },
};
EDIT
I've gotten closer to emulating it, but haven't figured out how to loop the initial requests yet.
This yields me an array of arrays that I can at least work with, but it's janky and unsustainable.
export async function getStaticProps() {
  const caseOneRecords = await setOverviewBase("appBaseId01")
    .select({})
    .firstPage();
  const caseTwoRecords = await setOverviewBase("appBaseId02")
    .select({})
    .firstPage();
  const cases = [];
  cases.push(minifyOverviewRecords(caseOneRecords));
  cases.push(minifyOverviewRecords(caseTwoRecords));
  return {
    props: {
      cases,
    },
  };
}
setOverviewBase is a helper that establishes the Airtable connection and sets the table name.
const setOverviewBase = (baseId) =>
  base.base(baseId)("Case Overview Information");
You can map the array of base IDs and await with Promise.all. Assuming you have getFirstPage and minifyOverviewRecords defined as below, you could do the following:
const getFirstPage = (baseId) =>
  airtable
    .base(baseId)("Case Overview Information")
    .select({})
    .firstPage();

const minifyOverviewRecords = (records) =>
  records.map((detail) => {
    return {
      city: detail.get("City") || null,
      name: detail.get("Name") || null,
      state: detail.get("State") || null,
    };
  });

export async function getStaticProps() {
  const cases = await Promise.all(
    ["appBaseId01", "appBaseId02", "appBaseId03"].map(async (baseId) => {
      const firstPage = await getFirstPage(baseId);
      return minifyOverviewRecords(firstPage);
    })
  );
  return {
    props: {
      cases
    }
  };
}
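As a variation, if a single failing base shouldn't break the whole build, Promise.allSettled can keep just the bases that resolved. This is a sketch; the getFirstPage and minifyOverviewRecords helpers are passed in as parameters so it stays self-contained, and whether to drop or surface failed bases is a design choice:

```javascript
// Sketch: fetch several bases in parallel but tolerate individual failures,
// keeping only the bases whose fetch resolved.
async function getCasesTolerant(baseIds, getFirstPage, minifyOverviewRecords) {
  const results = await Promise.allSettled(
    baseIds.map(async (baseId) => {
      const firstPage = await getFirstPage(baseId);
      return minifyOverviewRecords(firstPage);
    })
  );
  return results
    .filter((result) => result.status === 'fulfilled')
    .map((result) => result.value);
}
```

Unlike Promise.all, a rejection from one base does not reject the whole batch; the rejected entries simply show up with status 'rejected' and are filtered out here.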

How to use dataloader?

I'm trying to figure this out.
I want to get all my users from my database, cache them,
and then when making a new request I want to get those that I've cached plus any new ones that have been created.
So far:
const batchUsers = async ({ user }) => {
  const users = await user.findAll({});
  return users;
};

const apolloServer = new ApolloServer({
  schema,
  playground: true,
  context: {
    userLoader: new DataLoader(() => batchUsers(db)), // not sending keys since I'm after all users
  },
});
my resolver:
users: async (obj, args, context, info) => {
  return context.userLoader.load();
}
The load method requires a parameter, but in this case I don't want a specific user, I want all of them.
I don't understand how to implement this; can someone please explain?
If you're trying to just load all records, then there's not much of a point in utilizing DataLoader to begin with. The purpose behind DataLoader is to batch multiple calls like load(7) and load(22) into a single call that's then executed against your data source. If you need to get all users, then you should just call user.findAll directly.
Also, if you do end up using DataLoader, make sure you pass in a function, not an object, as your context. The function will be run on each request, which ensures you're using a fresh instance of DataLoader instead of one with a stale cache.
context: () => ({
  userLoader: new DataLoader(async (ids) => {
    const users = await User.findAll({
      where: { id: ids }
    })
    // Note that we need to map over the original ids instead of
    // just returning the results of User.findAll because the
    // length of the returned array needs to match the length of the ids
    return ids.map(id => users.find(user => user.id === id) || null)
  }),
}),
Note that you could also return an instance of an error instead of null inside the array if you want load to reject.
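For instance, the batch function could return an Error for each missing id, which makes the corresponding load call reject. A sketch, with the data-source call passed in as findAll so it can be stubbed:

```javascript
// Sketch: a DataLoader batch function that returns an Error (instead of
// null) for any id that wasn't found, so loader.load(missingId) rejects.
// `findAll` stands in for a real data-source call such as User.findAll.
const makeBatchFn = (findAll) => async (ids) => {
  const users = await findAll({ where: { id: ids } });
  return ids.map(
    (id) => users.find((user) => user.id === id)
      || new Error(`No user found for id ${id}`)
  );
};
```

It would be wired up as new DataLoader(makeBatchFn(...)); DataLoader treats an Error entry in the returned array as a rejection for that key only, leaving the other keys resolved.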
Took me a while but I got this working:
const batchUsers = async (keys, { user }) => {
  const users = await user.findAll({
    raw: true,
    where: {
      Id: {
        // @ts-ignore
        // eslint-disable-next-line no-undef
        [op.in]: keys,
      },
    },
  });
  const gs = _.groupBy(users, 'Id');
  return keys.map(k => gs[k] || []);
};

const apolloServer = new ApolloServer({
  schema,
  playground: true,
  context: () => ({
    userLoader: new DataLoader(keys => batchUsers(keys, db)),
  }),
});
resolver:
user: {
  myUsers: ({ Id }, args, { userLoader }) => {
    return userLoader.load(Id);
  },
},
playground:
{
  users {
    Id
    myUsers {
      Id
    }
  }
}
playground explained:
users basically fetches all users, and then myUsers does the same thing by inheriting the Id from the first call.
I think I chose a horrible example here, since I did not see any performance gains from this. I did see, however, that the query turned into:
SELECT ... FROM User WHERE ID IN (...)
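That IN (...) query is the batching at work: every load call made in the same tick is queued and served by a single batch-function call. A miniature loader (purely an illustration of the mechanism, not DataLoader's actual implementation) shows the idea:

```javascript
// Miniature illustration of DataLoader-style batching: load() calls made in
// the same tick are queued, then one batch-function call serves them all.
class MiniLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.queue = [];
  }

  load(key) {
    return new Promise((resolve, reject) => {
      if (this.queue.length === 0) {
        // First load this tick: schedule one flush for the end of the tick.
        process.nextTick(() => this.flush());
      }
      this.queue.push({ key, resolve, reject });
    });
  }

  async flush() {
    const batch = this.queue;
    this.queue = [];
    try {
      const values = await this.batchFn(batch.map((item) => item.key));
      batch.forEach((item, i) => item.resolve(values[i]));
    } catch (err) {
      batch.forEach((item) => item.reject(err));
    }
  }
}
```

So N resolvers each calling load(id) during one request produce a single findAll with WHERE Id IN (...) rather than N separate queries, which is where the performance gain would show up with many users per request.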

Best practise to combine multiple rest calls to populate 1 graphQL type in apollo-server

I have a graphql User type that needs information from multiple REST APIs on different servers.
Basic example: get the user's firstname from REST domain 1 and the lastname from REST domain 2. Both REST domains have a common "userID" attribute.
A simplified example of my current resolver code:
user: async (_source, args, { dataSources }) => {
  try {
    const datasource1 = await dataSources.RESTAPI1.getUser(args.id);
    const datasource2 = await dataSources.RESTAPI2.getUser(args.id);
    return { ...datasource1, ...datasource2 };
  } catch (error) {
    console.log("An error occurred.", error);
  }
  return [];
}
This works fine for this simplified version, but I have 2 problems with this solution:
First, in real life there is a lot of logic going into merging the 2 JSON results, since some fields are shared but have different data (or are empty). So it's like cherry-picking both results to create a combined result.
My second problem is that this is still a waterfall method: first get the data from restapi1, and only when that's done call restapi2. Basically, apollo-server is reintroducing the REST waterfall-fetching that GraphQL tries to solve.
Keeping these 2 problems in mind: can I optimise this piece of code, or rewrite it for better performance or readability? Or are there any packages that might help with this behavior?
Many thanks!
With regard to performance, if the two calls are independent of one another, you can utilize Promise.all to execute them in parallel:
const [dataSource1, dataSource2] = await Promise.all([
  dataSources.RESTAPI1.getUser(args.id),
  dataSources.RESTAPI2.getUser(args.id),
])
We normally let GraphQL's default resolver logic do the heavy lifting, but if you're finding that you need to "cherry pick" the data from both calls, you can return something like this in your root resolver:
return { dataSource1, dataSource2 }
and then write resolvers for each field:
const resolvers = {
  User: {
    someField: ({ dataSource1, dataSource2 }) => {
      return dataSource1.a || dataSource2.b
    },
    someOtherField: ({ dataSource1, dataSource2 }) => {
      return someCondition ? dataSource1.foo : dataSource2.bar
    },
  }
}
Assuming your user resolver returns type User, for example's sake...

type User {
  id: ID!
  datasource1: RandomType
  datasource2: RandomType
}

You can create individual resolvers for each field in type User; this can reduce the complexity of the user query to only the requested fields.
query {
  user {
    id
    datasource1 {
      ...
    }
  }
}
const resolvers = {
  Query: {
    user: () => {
      return { id: "..." };
    }
  },
  User: {
    datasource1: () => { ... },
    datasource2: () => { ... } // won't execute, since the query above doesn't request it
  }
};
The datasource1 and datasource2 resolvers will execute in parallel, but only after Query.user executes.
For a parallel call:
const users = async (_source, args, { dataSources }) => {
  try {
    const promises = [
      dataSources.RESTAPI1,
      dataSources.RESTAPI2,
    ].map((api) => api.getUser(args.id)); // call via the object so `this` stays bound
    const data = await Promise.all(promises);
    return Object.assign({}, ...data);
  } catch (error) {
    console.log("An error occurred.", error);
  }
  return [];
};

Generator syntax with 'flow' in MobX State Tree

I have three actions in a MobX State Tree store: the first fetches data from the API, the second sends a POST request to the database using the data from the API, and the third takes the response and saves it to the store.
The store simply consists of a map of these data structures, called Lists:
export const ListStore = types
  .model('ListStore', {
    lists: types.map(List),
  })
The first two actions to send the GET and POST requests work fine:
.actions((self) => ({
  fetchTitles: flow(function* fetchTitles(params: Params) {
    const env = getStoreEnv(self);
    const { clients } = env;
    const browseParams: BrowseParams = {
      category: 'movies',
      imdb_score_min: params.filter.imdbFilterScore,
    };
    let browseResult;
    try {
      browseResult = yield clients.browseClient.fetch(browseParams);
    } catch (error) {
      console.error('Failed to fetch titles', error);
    }
    return browseResult.data.results.map((title) => title.uuid);
  }),
}))
.actions((self) => ({
  postList: flow(function* postList(params: Params) {
    const env = getStoreEnv(self);
    const { clients } = env;
    const titles = yield self.fetchTitles(params);
    return clients.listClient.create({
      name: params.name,
      titles,
      invites: params.invites,
      filter: params.filter,
    });
  }),
}))
But when it comes to the third action, actually saving the List to the ListStore, no such luck. I've tried quite a few variations, but none of them work. Honestly, I'm not too familiar with generator syntax, and I even tried doing it without a generator. Here you can see my attempts:
createList: flow(function* createList(params: Params) {
  const env = getStoreEnv(self);
  const list = yield self.postList(params);
  console.log('list in createList', list.data);
  return self.lists.put(List.create({ ...list.data }, env));
  // return self.lists.set(list.id, list.data);
}),

createList: flow(function* createList(params: Params) {
  const list = yield self.postList(params);
  console.log('list in createList', list.data);
  yield self.lists.set(list.id, list.data);
}),

createList(params: Params) {
  return self.postList(params).then((list) => {
    console.log('list in createList', list.data);
    self.lists.set(list.id, list.data);
  });
},

createList: flow(function* createList(params: Params) {
  yield self.postList(params).then((list) => {
    console.log('list in createList', list.data);
    return self.lists.set(list.id, list.data);
  });
}),
I've tried with both .set() and .put(), but to no avail. I've also tried using yield and return...nothing seems to work. The data logged in console.log('list in createList', list.data); looks correct and matches the model (and if it didn't, wouldn't I get an error saying so?). No errors are logged to the console, it just silently fails.
If you can spot the error and see how this should be written, I will be extremely grateful. Thank you!
It turns out that the issue was not with the syntax: as you can see in the first comment by the maintainer of MST, the second version is correct.
The problem lies in the way the model (not shown here) was created.
