I want to join and filter a raw query.
const projects = await sequelize.query('SELECT * FROM projects + SQL MAGIC', {
model: Projects,
mapToModel: true,
type: QueryTypes.SELECT,
});
In this query, I want to replace the projects table with the select + SQL magic subquery:
const dinamic_where = {id: 1}
const projects = await Projects.findAll({
  where: { ...dinamic_where },
  include: [{ model: Organization }],
});
So the generated query should become:
SELECT fields,... FROM (SELECT * FROM projects + SQL MAGIC) JOIN organization WHERE organization.id = 1;
Bind parameters are not suitable because dinamic_where can contain a different number of fields.
If you need to modify the FROM part, I think you need slightly lower-level access to Sequelize.
There is a function queryGenerator.selectQuery; however, it takes a string as the FROM table name, meaning that if I do
selectQuery('(...SQL MAGIC)', options, Projects)
This will generate a query string as
SELECT ... FROM '(...SQL MAGIC)' ...
The FROM part is treated as a string value, which is not valid SQL.
So, here is a slightly hacky workaround.
const customQuery = selectQuery('FROM_TO_BE_REPLACED', options, Projects)
// Use a JS string replace to inject the dynamic SQL for FROM.
// If it is Postgres, double quotes are added by the queryGenerator;
// if it is MySQL, backticks would be added instead.
customQuery.replace('"FROM_TO_BE_REPLACED"', '(...SQL MAGIC)')
All of it in action:
const { QueryTypes } = require("sequelize");
const Model = require("sequelize/lib/model");
const parentOptions = {
where: {
id: 1,
key: 'value'
},
include: [Organization]
};
// This is required when the query has `include` options; this one line makes sure the includes are serialized correctly.
Model._validateIncludedElements.bind(Projects)(parentOptions);
const customQuery = sequelize.getQueryInterface()
.queryGenerator
.selectQuery('FROM_TO_BE_REPLACED', parentOptions, Projects);
const fromQuery = '(SELECT * FROM SQL MAGIC)';
const projects = await sequelize.query(customQuery.replace('"FROM_TO_BE_REPLACED"', fromQuery),
{
type: QueryTypes.SELECT
}
);
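If you also want model instances back instead of plain rows, you can reuse the model and mapToModel options from the original raw query. A minimal sketch, using the customQuery and fromQuery values built above:
const projects = await sequelize.query(
  customQuery.replace('"FROM_TO_BE_REPLACED"', fromQuery),
  {
    model: Projects,     // map each returned row to a Projects instance
    mapToModel: true,
    type: QueryTypes.SELECT,
  }
);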
I am working with a PostgreSQL database using Prisma. I have a bulk update command which I want to fail if any of the records have changed since my last read.
My schema:
model OrderItem {
  id          String   @id @default(uuid()) @db.Uuid
  quantity    Int
  lastUpdated DateTime @updatedAt @map("last_updated")

  @@map("order_item")
}
I have written a query which works, but I built the query manually rather than using Prisma's safe query builder tools.
My query:
type OrderItemType = {
id: string;
quantity: number;
lastUpdated: Date;
}
type OrderItemUpdateDataType = {
quantity: number;
}
const updateByIds = async (
orderItemIdLastUpdatedTuples: ([OrderItemType['id'], OrderItemType['lastUpdated']])[],
orderItemUpdateData: OrderItemUpdateDataType,
) => {
// Optimistic concurrency - try updating based on last known "last updated" state. If mismatch, fail.
await prisma.$transaction(async (prisma) => {
// TODO: Prefer prisma.$queryRaw. Prisma.join() works on id[], but not on [id, lastUpdated][]
const idLastUpdatedPairs = orderItemIdLastUpdatedTuples
.map(([id, lastUpdated]) => `(uuid('${id}'), '${lastUpdated.toISOString()}')`)
.join(', ');
const query = `SELECT * FROM order_item WHERE (id, last_updated) in ( ${idLastUpdatedPairs} )`;
const items = await prisma.$queryRawUnsafe<OrderItem[]>(query);
// If the query doesn't return the expected count, another writer has updated the rows since our last read.
const itemIds = orderItemIdLastUpdatedTuples.map(([id]) => id);
if (items.length !== orderItemIdLastUpdatedTuples.length) {
throw new ConcurrentUpdateError(`Order Items ${itemIds.join(', ')} were stale. Failed to update.`);
}
await prisma.orderItem.updateMany({
where: { id: { in: itemIds } },
data: orderItemUpdateData,
});
});
};
This function updates a set of items. It accepts a list of tuples (id/lastUpdated pairs), starts an explicit transaction, performs an unsafe SELECT query to confirm the items to be affected haven't changed, and then updates them. This follows the guidance in Prisma's docs here: https://www.prisma.io/docs/concepts/components/prisma-client/transactions#interactive-transactions-in-preview
I was hoping to achieve the same results using prisma.$queryRaw rather than prisma.$queryRawUnsafe or even using implicit transactions rather than an explicit transaction wrapper. I wasn't able to find a syntax for expressing "where in tuple" using either of these approaches, though.
I am able to express what I want using implicit transactions when updating a single record. An example here would look like:
const { count } = await prisma.orderItem.updateMany({
where: { id, lastUpdated },
data: orderItemUpdateData,
});
and when using an explicit transaction with a safe query, I got stuck on joining the array of tuples properly.
From the Prisma documentation, https://www.prisma.io/docs/concepts/components/prisma-client/raw-database-access#tagged-template-helpers, there exists a Prisma.join command (which happens implicitly when using their tagged template helper syntax) but I wasn't able to generate a valid output when feeding it an array of tuples.
Did I miss anything? Does Prisma support joining a tuple using their safe query template syntax?
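For what it's worth, one approach that may work (a sketch only, not verified against every Prisma version) is to build each tuple as a Prisma.sql fragment and pass the array of fragments to Prisma.join, which in recent versions accepts nested fragments as well as plain values. Reusing the names from the snippet above:
import { Prisma } from '@prisma/client';

// Sketch: replaces the string-built query from the TODO above.
// Each (id, last_updated) pair becomes its own Prisma.sql fragment, and
// Prisma.join stitches the fragments together with commas, so every value
// stays parameterized instead of being interpolated into the SQL string.
const pairs = orderItemIdLastUpdatedTuples.map(
  ([id, lastUpdated]) => Prisma.sql`(${id}::uuid, ${lastUpdated})`
);

const items = await prisma.$queryRaw<OrderItemType[]>(
  Prisma.sql`SELECT * FROM order_item WHERE (id, last_updated) IN (${Prisma.join(pairs)})`
);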
I have the following schema with an array of ObjectIds:
const userSchema = new Schema({
...
article: [{
type: mongoose.Schema.Types.ObjectId,
}],
...
},
I want to count the array elements; in the example above, the result should be 10.
I have tried the following, but it didn't work for me. req.query.id is the user's _id and is used to filter for the specific user with the matching article array.
const userData = User.aggregate(
[
{
$match: {_id: id}
},
{
$project: {article: {$size: '$article'}}
},
]
)
console.log(res.json(userData));
The console.log(article.length) currently gives me 0. How can I do this? Is the aggregate function the right choice, or is there a better way to count the elements of an array?
Not sure why you would use aggregate when the array of ids is already on the user object.
Define the article field as a reference:
const mongoose = require('mongoose');
const {Schema} = mongoose;
const {Types} = Schema;
const userSchema = new Schema({
...
article: {
type: [Types.ObjectId],
ref: 'Article',
index: true,
},
...
});
// add virtual if You want
userSchema.virtual('articleCount').get(function () {
return this.article.length;
});
and get them using populate:
const user = await User.findById(req.query.id).populate('article');
console.log(user.article.length);
or simply have array of ids:
const user = await User.findById(req.query.id);
console.log(user.article.length);
or make use of the virtual field:
const user = await User.findById(req.query.id);
console.log(user.articleCount);
P.S. I use aggregate when I need to do complex post-filter logic, which is in fact aggregation. Think of it like this: you have a result set, but you want to process it on the database side to get more specific information, which would be inefficient if you ran queries against the database inside a loop. For example, if I need to get users who added a specific article on a specific day and partition them by hour.
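For completeness, if you do want the aggregation approach from the question, a minimal sketch would look like the following (assuming req.query.id is a valid ObjectId string). The two usual pitfalls are forgetting to await the aggregation and matching _id against a plain string instead of an ObjectId:
const mongoose = require('mongoose');

const result = await User.aggregate([
  // Cast the string id to an ObjectId, otherwise $match finds nothing.
  { $match: { _id: new mongoose.Types.ObjectId(req.query.id) } },
  // Replace the article array with its element count.
  { $project: { articleCount: { $size: '$article' } } },
]);

// aggregate() returns an array of plain documents.
console.log(result[0] && result[0].articleCount);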
I want to implement an UPDATE ... SET statement with named parameters. Is it possible?
For an object like:
{
_id: 1,
name: "new_name",
password: "new_password",
subscribed: true,
email: "new_email#email.com"
}
This is my guess:
UPDATE
accounts
SET
$(this:name) = $(this:csv)
WHERE
_id = $(this._id)
The object fields may vary depending on requests sent, so I don't want to "hard code" the parameters in.
The only way to automate a SET operation within pg-promise is to use the multi-row update approach, with ColumnSet-s.
// create column set statically, once:
const cs = new pgp.helpers.ColumnSet(['?_id', 'name', 'password', 'subscribed', 'email'],
{table: 'accounts'});
When generating the update query...
const where = pgp.as.format('WHERE _id = $1', [_id]);
const update = `${pgp.helpers.update(data, cs)} ${where}`;
Executing the query:
await db.none(update);
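One thing worth noting: a plain ColumnSet expects every listed column to be present on the data object. Since the object fields may vary per request, a sketch of one way to handle that is pg-promise's Column skip option (it applies to single-row updates only), so absent properties are simply left out of the SET clause:
const pgp = require('pg-promise')();

// Columns other than the ?_id condition are skipped when the property
// is absent from the data object.
const cs = new pgp.helpers.ColumnSet([
    '?_id',
    {name: 'name', skip: c => !c.exists},
    {name: 'password', skip: c => !c.exists},
    {name: 'subscribed', skip: c => !c.exists},
    {name: 'email', skip: c => !c.exists}
], {table: 'accounts'});

// Partial object: only name and email end up in the SET clause.
const data = {_id: 1, name: 'new_name', email: 'new_email@email.com'};

const update = pgp.helpers.update(data, cs) +
               pgp.as.format(' WHERE _id = $1', [data._id]);

await db.none(update);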
I've got a problem that I've been stuck on with no luck so far, seemingly similar in nature to Where condition for joined table in Sequelize ORM, except that I'd like to query on a previous join. Perhaps the code will explain my problem. Happy to provide any extra info.
Models:
A.hasMany(B);
B.belongsTo(A);
B.hasMany(C);
C.belongsTo(B);
This is what I'd like to be able to achieve with Sequelize:
SELECT *
FROM `A` AS `A`
LEFT OUTER JOIN `B` AS `B` ON `A`.`id` = `B`.`a_id`
LEFT OUTER JOIN `C` AS `B->C` ON `B`.`id` = `B->C`.`b_id`
AND (`B`.`b_columnName` = `B->C`.`c_columnName`);
This is how I imagine it working (instead, it creates a raw query, actually two raw queries for A and B/C, with AND (`C`.`columnName` = '$B.columnName$') on the join, because the second argument is treated as a string). I have tried sequelize.col, sequelize.where(sequelize.col(...)), etc.
A.findOne({
where: { id: myId },
include: [{
model: B,
include: [{
model: C,
where: { '$C.c_columnName$': '$B.b_columnName$' }
}]
}]
});
Use the Op.col query operator to find columns that match other columns in your query. If you are only joining a single table you can pass an object instead of an array to make it more concise.
const Op = Sequelize.Op;
const result = await A.findOne({
include: {
model: B,
include: {
model: C,
where: {
c_columnName: {
[Op.col]: 'B.b_columnName',
},
}
},
},
});
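One caveat, based on how Sequelize builds eager loads: putting a where on an include makes that join an INNER JOIN by default. If you want to keep the LEFT OUTER JOINs from the target SQL, a sketch of the same query with required: false on the includes:
const Op = Sequelize.Op;

const result = await A.findOne({
  where: { id: myId },
  include: {
    model: B,
    required: false,   // keep LEFT OUTER JOIN for B
    include: {
      model: C,
      required: false, // keep LEFT OUTER JOIN for C; the where moves into the ON clause
      where: {
        c_columnName: {
          [Op.col]: 'B.b_columnName',
        },
      },
    },
  },
});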
I am using the angular-fullstack Yeoman generator. I created a schema for a Product and a set of API CRUD operations, and it all works well. Now, in the get-list operation, I don't want to receive all the fields, only a subset, like a SELECT in SQL. I would also like to alter one value: instead of the price, I need price * 1.1.
How can I do that?
Here is the code for the index method (it returns the list of products):
// Gets a list of Products
export function index(req, res) {
Product.findAsync()
.then(respondWithResult(res))
.catch(handleError(res));
}
function respondWithResult(res, statusCode) {
statusCode = statusCode || 200;
return function(entity) {
if (entity) {
res.status(statusCode).json(entity);
}
};
}
As stated in the documentation, .find() takes two params, query and projection.
// params
Product.findAsync(query, projection)
You can use the projection to "select" a subset of fields:
// example
Product.findAsync({}, { _id: 1, name: 1, description: 1 })
// result: only the three specified fields will be returned
[
{ _id: 'abc123', name: 'Some name', description: 'Some description'},
{...}
]
If you want to manipulate the data (for example, price * 1.1), I think you have to use the aggregation pipeline.
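A minimal sketch of that approach, reusing the helpers from the index method above (field names are assumed from the example; adjust them to your Product schema):
// Gets a list of Products with a subset of fields and a computed price
export function index(req, res) {
  Product.aggregate([
    {
      $project: {
        name: 1,
        description: 1,
        price: { $multiply: ['$price', 1.1] },  // return price * 1.1 instead of the stored price
      },
    },
  ])
    .then(respondWithResult(res))
    .catch(handleError(res));
}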