Get author user ID using Parse.Query not current user - javascript

I have two Parse classes, User and Place.
When a user adds a place, the user is stored as a Pointer in the Place class's user column.
To list all places and determine how many places a user has, I use the following query:
loadTotalPointsDetail(params: any = {}): Promise<Place[]> {
  const page = params.page || 0;
  const limit = params.limit || 100;
  const query = new Parse.Query(Place);
  query.equalTo('user', Parse.User.current());
  query.skip(page * limit);
  query.limit(limit);
  query.include('category');
  query.include('user');
  query.doesNotExist('deletedAt');
  return query.find();
}
Filtering by Parse.User.current() I get the current user's places.
If I don't filter by Parse.User.current(), it returns all places as objects, containing all their data.
How can I filter by a place's real author/user rather than the current (logged-in) one?

loadTotalPointsDetail(params: any = {}): Promise<Place[]> {
  const page = params.page || 0;
  const limit = params.limit || 100;
  const query = new Parse.Query(Place);
  const user = new Parse.User();
  user.id = 'The id of the user that you want to search for';
  query.equalTo('user', user);
  query.skip(page * limit);
  query.limit(limit);
  query.include('category');
  query.include('user');
  query.doesNotExist('deletedAt');
  return query.find();
}
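If you'd rather not assign an id onto an empty Parse.User, the Parse JS SDK also provides createWithoutData, which builds the same pointer-only object (a sketch; the id string is a placeholder):

const user = Parse.User.createWithoutData('The id of the user that you want to search for');
query.equalTo('user', user);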

I'll post my solution here; it's not the most elegant, but it works for me:
async loadData() {
  try {
    const places = await this.placeService.loadTotalPointsDetail(this.params);
    const placeU = await this.placeService.loadPlaceU(this.getParams().id);
    for (const place of places) {
      this.places.push(place);
    }
    let u = placeU.user.id;
    let totalUserPlaces = places.filter(x => x.user.id == u);
    if (totalUserPlaces.length) { // check length: an empty array is still truthy
      /* The total returned by reduce() will be stored in myTotal */
      const myTotal = totalUserPlaces.reduce((total, place) => {
        /* For each place iterated, access the points field of the
           current place and add it to the running total */
        return total + place.points;
      }, 0); /* Total is initially zero */
      this.points = myTotal;
    }
  } catch (err) {
    const message = await this.getTrans('ERROR_NETWORK');
    this.showToast(message);
  }
}
So I'm running two separate queries:
const places = await this.placeService.loadTotalPointsDetail(this.params); -> gets all place listings
const placeU = await this.placeService.loadPlaceU(this.getParams().id); -> gets the current place by its ID
I extract the place author's user ID:
let u = placeU.user.id;
and I filter by that user in order to get their places:
let totalUserPlaces = places.filter(x => x.user.id == u);
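For what it's worth, a leaner variant would skip the client-side filter entirely and query only the author's places, then reduce their points. This is only a sketch, and it assumes loadTotalPointsDetail has been adapted to accept the author's objectId, as in the accepted answer:

async loadAuthorPoints(authorId: string): Promise<number> {
  // Hypothetical: loadTotalPointsDetail takes the author's objectId and
  // filters with query.equalTo('user', user) as shown above
  const places = await this.placeService.loadTotalPointsDetail({ userId: authorId });
  return places.reduce((total, place) => total + place.points, 0);
}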

Related

Execute promise or await with generated string variable

I am building a Mongoose query and storing it in a variable called query. The code below shows it:
let query = "Product.find(match)";
if (requestObject.query.sortBy) {
query = query.concat(".", "sort(sort)");
const parts = requestObject.query.sortBy.split(":");
sort[parts[0]] = parts[1] === "desc" ? -1 : 1;
}
if (requestObject.query.fields) {
query = query.concat(".", "select(fields)");
const fields = requestObject.query.fields.split(",").join(" ");
const items = await Product.find(match).sort(sort).select(fields); //.populate("category").exec();
/**const items = await Product.find(match).sort(sort).select("-__v"); //.populate("category").exec();**/
}
I am facing an issue when attempting to run a Mongoose query that I have generated and stored in a string. When I run it in Postman, the response is 200 but no data is returned. Below is a console.log(query) on line 2
What I hope to achieve is to have await, or a new Promise, execute the content of the query variable, as shown below:
const items = new Promise((resolve) => resolve(query)); //.populate("category").exec();
items
  ? responseObject.status(200).json(items)
  : responseObject
      .status(400)
      .json({ message: "Could not find products, please try again" });
I would appreciate any help very much, and if you can show me a better way of doing it, even better.
This doesn't really make sense. You are building a string, not a query. You can't do anything with that string. (You could eval it, but you really shouldn't). Instead, build a query object!
let query = Product.find(match);
if (requestObject.query.sortBy) {
  const [field, dir] = requestObject.query.sortBy.split(":");
  const sort = {};
  sort[field] = dir === "desc" ? -1 : 1;
  query = query.sort(sort);
}
if (requestObject.query.fields) {
  const fields = requestObject.query.fields.split(",");
  query = query.select(fields);
}
//query.populate("category")
const items = await query.exec();
if (items) {
  responseObject.status(200).json(items)
} else {
  responseObject.status(400).json({ message: "Could not find products, please try again" });
}
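For context, here is a minimal sketch of how the query-object pattern might sit in an Express route; the route path, model, and response shape are assumptions, not part of the original question:

// Hypothetical Express route wrapping the pattern above
app.get("/products", async (requestObject, responseObject) => {
  try {
    const match = {}; // build match criteria from the request as needed
    let query = Product.find(match);
    if (requestObject.query.sortBy) {
      const [field, dir] = requestObject.query.sortBy.split(":");
      query = query.sort({ [field]: dir === "desc" ? -1 : 1 });
    }
    const items = await query.exec();
    responseObject.status(200).json(items);
  } catch (err) {
    responseObject.status(500).json({ message: err.message });
  }
});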
If you really want to get that string for something (e.g. debugging), build it separately from the query:
let query = Product.find(match);
let queryStr = 'Product.find(match)';
if (requestObject.query.sortBy) {
  const [field, dir] = requestObject.query.sortBy.split(":");
  const sort = {[field]: dir === "desc" ? -1 : 1};
  query = query.sort(sort);
  queryStr += `.sort(${JSON.stringify(sort)})`;
}
if (requestObject.query.fields) {
  const fields = requestObject.query.fields.split(",");
  query = query.select(fields);
  queryStr += `.select(${JSON.stringify(fields)})`;
}
//query.populate("category")
//queryStr += `.populate("category")`;
console.log(queryStr);
const items = await query.exec();
…

How do I query a firebase collection by the 'in' parameter with more than 10 values?

I have this function, getFollowingPieces(), where I get posts with userIds that the logged-in user follows. Although now I've found it won't work if they follow more than 10 people. The only solution I've seen tells people to use a get request for each userId, but that doesn't allow for pagination or proper ordering of data. (Notice I'm using an after value and an orderBy.) I'd hate to have to move off Firebase for this limitation, but following users is a big aspect of this app. Here's my function below:
export async function getFollowingPieces(userId, following, filter, after) {
  const order = filter === "Popular" ? "likeCount" : "dateCreated";
  const result = await firebase
    .firestore()
    .collection("pieces")
    .where("userId", "in", following.slice(0, 10))
    .where("published", "==", true)
    .orderBy(order, "desc")
    .limit(10)
    .get();
  const last = result.docs[result.docs.length - 1];
  const userFollowedPieces = result.docs.map((piece) => ({
    ...piece.data(),
    docId: piece.id,
  }));
  const piecesWithUserDetails = await Promise.all(
    userFollowedPieces.map(async (piece) => {
      let userLikedPiece = false;
      let userBookmarkedPiece = false;
      if (piece.likes.includes(userId)) {
        userLikedPiece = true;
      }
      if (piece.bookmarks.includes(userId)) {
        userBookmarkedPiece = true;
      }
      const user = await getUserByUserId(piece.userId);
      const { username, picture, fullName } = user[0];
      return {
        username,
        picture,
        fullName,
        ...piece,
        userLikedPiece,
        userBookmarkedPiece,
      };
    })
  );
  return { piecesWithUserDetails, last };
}
With the way Cloud Firestore indexes work, these limits are in place to discourage inefficient querying. In your use case, there are two ways that come to mind that could perform the desired query.
In either case, you will see performance benefits in bandwidth and latency by executing them using a Callable Cloud Function.
Of the two approaches, I recommend the first, as it is guaranteed to return relevant results if they exist.
Approach 1: Chunked requests
In this approach, you split the following array into blocks of 10 authors, make requests for each block and zip the sorted results together.
Max. documents retrieved:
(Math.ceil(AUTHOR_COUNT / 10) * 10) documents
Pros:
Works well for a small number of followed authors
Uses index-based querying
Will return related documents regardless of age/popularity
Cons:
Wasteful document requests when the followed-authors list is large
/**
 * An intermediate state of processing a `PieceData` object used for sorting.
 * @typedef PieceMetadata
 *
 * @property {String} docId - the piece's document ID
 * @property {QueryDocumentSnapshot} snapshot - the piece's document snapshot
 * @property {Number | undefined} likeCount - the piece's value for `likeCount`
 * @property {Number | undefined} dateCreated - the piece's value for `dateCreated`
 */
/**
 * Returns the `pageSize` most recent pieces (as `PieceMetadata`
 * objects) sorted according to the given `order` field for the
 * given array of `authors`.
 *
 * `authors` must have no more than 10 entries or you must use
 * `_getPublishedPiecesByBatchOfAuthors()` instead.
 *
 * Optionally, `after` can be provided as a `DataSnapshot` or
 * appropriate value for `order` to support paginated results.
 */
async function _getPublishedPiecesByAuthors(authors, order, pageSize, after = undefined) {
  const query = firebase
    .firestore()
    .collection("pieces")
    .where("userId", "in", authors)
    .where("published", "==", true)
    .orderBy(order, "desc")
    .limit(pageSize);
  const result = await (typeof after !== "undefined" ? query.startAfter(after) : query)
    .get();
  const pieceMetadataArr = [];
  result.forEach((piece) => {
    pieceMetadataArr.push({
      docId: piece.id, // document ID
      [order]: piece.get(order), // likeCount/dateCreated as appropriate
      snapshot: piece // unprocessed snapshot
    });
  });
  return pieceMetadataArr;
}
/** splits array `arr` into chunks of max size `n` */
function chunkArr(arr, n) {
  if (n <= 0) throw new Error("n must be greater than 0");
  return Array
    .from({ length: Math.ceil(arr.length / n) })
    .map((_, i) => arr.slice(n * i, n * (i + 1)));
}
/**
* Returns the `pageSize` most recent pieces (as `PieceMetadata`
* objects) sorted according to the given `order` field for the
* given array of `authors`.
*
* `authors` may have as many entries as desired.
*
* Optionally, `after` can be provided as a `DataSnapshot` or
* appropriate value for `order` to support paginated results.
*/
async function _getPublishedPiecesByBatchOfAuthors(authors, order, pageSize, after = undefined) {
  return Promise.all(
    chunkArr(authors, 10)
      .map(authorsInChunk => _getPublishedPiecesByAuthors(authorsInChunk, order, pageSize, after))
  )
    .then(resultBatches => {
      return resultBatches.flat()
        .sort((a, b) => b[order] - a[order]) // descending, matching the queries; works if dateCreated is numeric
        .slice(0, pageSize); // only return first X results
    });
}
export async function getFollowingPieces(userId, following, filter, after = undefined) {
  const order = filter === "Popular" ? "likeCount" : "dateCreated";
  const sortedPieceMetadata = await _getPublishedPiecesByBatchOfAuthors(following, order, 10, after);
  const userFollowedPieces = sortedPieceMetadata // "hydrate" the pieces from their snapshot object
    .map(({ docId, snapshot }) => ({
      ...snapshot.data(),
      docId
    }));
  const lastSnapshot = sortedPieceMetadata.length > 0
    ? sortedPieceMetadata[sortedPieceMetadata.length - 1].snapshot
    : undefined;
  const piecesWithUserDetails = await Promise.all(
    userFollowedPieces.map(/* ... */)
  );
  return { piecesWithUserDetails, lastSnapshot };
}
Approach 2: Iterate /pieces
In this variation, you search each document in /pieces that matches the base query, and pick out those that match the user's followed authors list.
Max. documents retrieved:
maxSearchCount documents
Pros:
Works well for a large number of active/popular followed authors
Cons:
Wasteful document requests with a small followed-authors list
Wasteful document requests with inactive/unpopular authors
May hit the query limit before getting the desired number of documents for inactive/unpopular authors
Requires client-based/function-based filtering
async function findFromQuery(query, predicate, count, pageSize, maxSearchCount, after = undefined) {
  if (!query || !("orderBy" in query))
    throw new TypeError("query must be a Firestore Query or CollectionReference");
  if (typeof predicate !== "function")
    throw new TypeError("predicate must be a function, was " + typeof predicate);
  if (typeof count !== "number")
    throw new TypeError("count must be a number, was " + typeof count);
  if (typeof pageSize !== "number")
    throw new TypeError("pageSize must be a number, was " + typeof pageSize);
  if (typeof maxSearchCount !== "number")
    throw new TypeError("maxSearchCount must be a number, was " + typeof maxSearchCount);
  const pageQuery = query.limit(pageSize);
  const baseQuery = after
    ? pageQuery.startAfter(after)
    : pageQuery;
  const docs = [];
  let searched = 0, lastSnapshot = undefined;
  while (docs.length < count && searched < maxSearchCount) {
    const querySnapshot = await (lastSnapshot
      ? pageQuery.startAfter(lastSnapshot).get()
      : baseQuery.get());
    if (querySnapshot.empty) break; // no more documents to search
    querySnapshot.forEach((doc) => {
      if (predicate(doc)) {
        docs.push(doc);
      }
      lastSnapshot = doc;
    });
    searched += querySnapshot.size; // size is a property on QuerySnapshot
  }
  // may have more than `count` results, so only return that many
  return docs.slice(0, count);
}
export async function getFollowingPieces(userId, following, filter, after = undefined) {
  const order = filter === "Popular" ? "likeCount" : "dateCreated";
  const query = firebase
    .firestore()
    .collection("pieces")
    .orderBy(order, "desc");
  const foundPieceSnapshots = await findFromQuery(query, (doc) => {
    const author = doc.get("userId");
    return following.includes(author);
  }, 10, 10, 1000, after);
  const userFollowedPieces = foundPieceSnapshots.map(piece => ({
    ...piece.data(),
    docId: piece.id
  }));
  const lastSnapshot = foundPieceSnapshots[foundPieceSnapshots.length - 1];
  const piecesWithUserDetails = await Promise.all(
    userFollowedPieces.map(/* ... */)
  );
  return { piecesWithUserDetails, lastSnapshot };
}

Firebase cloud function onUpdate is triggered but doesn't execute as wanted

I'm currently making a web application with React front-end and Firebase back-end. It is an application for a local gym and consists of two parts:
A client application for people who train at the local gym
A trainer application for trainers of the local gym
The local gym offers programs for companies. So a company takes out a subscription, and employees of the company can train at the local gym and use the client application. It is important that the individual progress of the company's employees is tracked, as well as the overall progress (the total number of kilograms lost by all employees of company x together).
In the Firestore collection 'users' every user document has the field bodyweight. Whenever a trainer fills in a progress form after a physical assessment for a specific client, the bodyweight field in the user document of the client gets updated to the new bodyweight.
In Firestore there is another collection 'companies' where every company has a document. My goal is to put the total amount of kilograms lost by the employees of the company in that specific document. So every time a trainer updates the weight of an employee, the company document needs to be updated. I've made a cloud function that listens to updates of a user's document. The function is listed below:
exports.updateCompanyProgress = functions.firestore
  .document("users/{userID}")
  .onUpdate((change, context) => {
    const previousData = change.before.data();
    const data = change.after.data();
    if (previousData === data) {
      return null;
    }
    const companyRef = admin.firestore.doc(`/companies/${data.company}`);
    const newWeight = data.bodyweight;
    const oldWeight = previousData.bodyweight;
    const lostWeight = oldWeight > newWeight;
    const difference = diff(newWeight, oldWeight);
    const currentWeightLost = companyRef.data().weightLostByAllEmployees;
    if (!newWeight || difference === 0 || !oldWeight) {
      return null;
    } else {
      const newCompanyWeightLoss = calcNewCWL(
        currentWeightLost,
        difference,
        lostWeight
      );
      companyRef.update({ weightLostByAllEmployees: newCompanyWeightLoss });
    }
  });
There are two simple functions in the cloud function above:
const diff = (a, b) => (a > b ? a - b : b - a);
const calcNewCWL = (currentWeightLost, difference, lostWeight) => {
  if (!lostWeight) {
    return currentWeightLost - difference;
  }
  return currentWeightLost + difference;
};
I've deployed the cloud function to Firebase to test it, but I can't get it to work. The function triggers whenever the user document is updated, but it doesn't update the company document with the new weightLostByAllEmployees value. It is the first time I'm using Firebase Cloud Functions, so there's a big chance it's some sort of rookie mistake.
Your current solution has some bugs in it that we can squash.
Always false equality check
You use the following equality check to determine if the data has not changed:
if (previousData === data) {
return null;
}
This will always be false as the objects returned by change.before.data() and change.after.data() will always be different instances, even if they contain the same data.
Company changes are never handled
While this could be a rare, maybe impossible event, if a user's company was changed, you should remove their weight from the total of the original company and add it to the new company.
In a similar vein, when an employee leaves a company or deletes their account, you should remove their weight from the total in an onDelete handler.
Handling floating-point sums
In case you didn't know, floating-point arithmetic has some minor quirks. Take, for example, the sum 0.1 + 0.2: to a human, the answer is 0.3, but to JavaScript and many other languages, the answer is 0.30000000000000004. See this question & thread for more information.
Rather than store your weight in the database as a floating-point number, consider storing it as an integer. As weight is often not a whole number (e.g. 9.81kg), you should store this value multiplied by 100 (for 2 decimal places) and rounded to the nearest integer. Then when you display it, you either divide it by 100 or splice in the appropriate decimal symbol.
const v = 1201;
console.log(v/100); // -> 12.01
const vString = String(v);
console.log(vString.slice(0,-2) + "." + vString.slice(-2) + "kg"); // -> "12.01kg"
So for the sum, 0.1 + 0.2, you would scale it up to 10 + 20, with a result of 30.
console.log(0.1 + 0.2); // -> 0.30000000000000004
console.log((0.1*100 + 0.2*100)/100); // -> 0.3
But this strategy on its own isn't bulletproof, because some multiplications still end up with these errors, like 0.14*100 = 14.000000000000002 and 0.29*100 = 28.999999999999996. To weed these out, we round the multiplied value.
console.log(0.01 + 0.14); // -> 0.15000000000000002
console.log((0.01*100 + 0.14*100)/100); // -> 0.15000000000000002
console.log((Math.round(0.01*100) + Math.round(0.14*100))/100) // -> 0.15
You can compare these using:
const arr = Array.from({length: 100}).map((_,i)=>i/100);
console.table(arr.map((a) => arr.map((b) => a + b)));
console.table(arr.map((a) => arr.map((b) => (a*100 + b*100)/100)));
console.table(arr.map((a) => arr.map((b) => (Math.round(a*100) + Math.round(b*100))/100)));
Therefore we can end up with these helper functions:
function sumFloats(a, b) {
  return (Math.round(a * 100) + Math.round(b * 100)) / 100;
}
function sumFloatsForStorage(a, b) {
  return Math.round(a * 100) + Math.round(b * 100);
}
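A quick sanity check of these helpers:

console.log(sumFloats(0.1, 0.2)); // -> 0.3
console.log(sumFloatsForStorage(0.1, 0.2)); // -> 30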
The main benefit of handling the weights this way is that you can now use FieldValue#increment() instead of a full blown transaction to shortcut updating the value. In the rare case that two users from the same company have an update collision, you can either retry the increment or fall back to the full transaction.
Inefficient data parsing
In your current code, you make use of .data() on the before and after states to get the data you need for your function. However, because you are pulling the user's entire document, you end up parsing all the fields in the document instead of just what you need - the bodyweight and company fields. You can do this using DocumentSnapshot#get(fieldName).
const afterData = change.after.data(); // parses everything - username, email, etc.
const { bodyweight, company } = afterData;
in comparison to:
const bodyweight = change.after.get("bodyweight"); // parses only "bodyweight"
const company = change.after.get("company"); // parses only "company"
Redundant math
For some reason you are calculating an absolute value of the difference between the weights, storing the sign of the difference as a boolean, and then using them together to apply the change back to the total weight lost.
The following lines:
const previousData = change.before.data();
const data = change.after.data();
const newWeight = data.bodyweight;
const oldWeight = previousData.bodyweight;
const lostWeight = oldWeight > newWeight;
const difference = diff(newWeight, oldWeight);
const currentWeightLost = companyRef.data().weightLostByAllEmployees;
const calcNewCWL = (currentWeightLost, difference, lostWeight) => {
  if (!lostWeight) {
    return currentWeightLost - difference;
  }
  return currentWeightLost + difference;
};
const newWeightLost = calcNewCWL(currentWeightLost, difference, lostWeight);
could be replaced with just:
const newWeight = change.after.get("bodyweight");
const oldWeight = change.before.get("bodyweight");
const deltaWeight = newWeight - oldWeight;
const currentWeightLost = companyRef.get("weightLostByAllEmployees") || 0;
const newWeightLost = currentWeightLost + deltaWeight;
Rolling it all together
exports.updateCompanyProgress = functions.firestore
  .document("users/{userID}")
  .onUpdate(async (change, context) => {
    // "bodyweight" is the weight scaled up by 100
    // i.e. "9.81kg" is stored as 981
    const oldHundWeight = change.before.get("bodyweight") || 0;
    const newHundWeight = change.after.get("bodyweight") || 0;
    const oldCompany = change.before.get("company");
    const newCompany = change.after.get("company");
    const db = admin.firestore();
    if (oldCompany === newCompany) {
      // company unchanged
      const deltaHundWeight = newHundWeight - oldHundWeight;
      if (deltaHundWeight === 0) {
        return null; // no action needed
      }
      const companyRef = db.doc(`/companies/${newCompany}`);
      await companyRef.update({
        weightLostByAllEmployees: admin.firestore.FieldValue.increment(deltaHundWeight)
      });
    } else {
      // company was changed
      const batch = db.batch();
      const oldCompanyRef = db.doc(`/companies/${oldCompany}`);
      const newCompanyRef = db.doc(`/companies/${newCompany}`);
      // remove weight from old company
      batch.update(oldCompanyRef, {
        weightLostByAllEmployees: admin.firestore.FieldValue.increment(-oldHundWeight)
      });
      // add weight to new company
      batch.update(newCompanyRef, {
        weightLostByAllEmployees: admin.firestore.FieldValue.increment(newHundWeight)
      });
      // commit the batched changes
      await batch.commit();
    }
  });
With transaction fallbacks
In the rare case where you get a write collision, this variant falls back to a traditional transaction to reattempt the change.
/**
 * Increments weightLostByAllEmployees in all documents atomically
 * using a transaction.
 *
 * `arrayOfCompanyRefToDeltaWeightPairs` is an array of company-increment pairs.
 */
function transactionIncrementWeightLostByAllEmployees(db, arrayOfCompanyRefToDeltaWeightPairs) {
  return db.runTransaction((transaction) => {
    // get all needed documents, then add the update for each to the transaction
    return Promise
      .all(
        arrayOfCompanyRefToDeltaWeightPairs
          .map(([companyRef, deltaWeight]) => {
            return transaction.get(companyRef)
              .then((companyDocSnapshot) => [companyRef, deltaWeight, companyDocSnapshot]);
          })
      )
      .then((arrayOfRefWeightSnapshotGroups) => {
        arrayOfRefWeightSnapshotGroups.forEach(([companyRef, deltaWeight, companyDocSnapshot]) => {
          const currentValue = companyDocSnapshot.get("weightLostByAllEmployees") || 0;
          transaction.update(companyRef, {
            weightLostByAllEmployees: currentValue + deltaWeight
          });
        });
      });
  });
}
exports.updateCompanyProgress = functions.firestore
  .document("users/{userID}")
  .onUpdate(async (change, context) => {
    // "bodyweight" is the weight scaled up by 100
    // i.e. "9.81kg" is stored as 981
    const oldHundWeight = change.before.get("bodyweight") || 0;
    const newHundWeight = change.after.get("bodyweight") || 0;
    const oldCompany = change.before.get("company");
    const newCompany = change.after.get("company");
    const db = admin.firestore();
    if (oldCompany === newCompany) {
      // company unchanged
      const deltaHundWeight = newHundWeight - oldHundWeight;
      if (deltaHundWeight === 0) {
        return null; // no action needed
      }
      const companyRef = db.doc(`/companies/${newCompany}`);
      await companyRef
        .update({
          weightLostByAllEmployees: admin.firestore.FieldValue.increment(deltaHundWeight)
        })
        .catch((error) => {
          // if an unexpected error, just rethrow it
          if (error.code !== "resource-exhausted")
            throw error;
          // encountered write conflict, fall back to transaction
          return transactionIncrementWeightLostByAllEmployees(db, [
            [companyRef, deltaHundWeight]
          ]);
        });
    } else {
      // company was changed
      const batch = db.batch();
      const oldCompanyRef = db.doc(`/companies/${oldCompany}`);
      const newCompanyRef = db.doc(`/companies/${newCompany}`);
      // remove weight from old company
      batch.update(oldCompanyRef, {
        weightLostByAllEmployees: admin.firestore.FieldValue.increment(-oldHundWeight)
      });
      // add weight to new company
      batch.update(newCompanyRef, {
        weightLostByAllEmployees: admin.firestore.FieldValue.increment(newHundWeight)
      });
      // commit the batched changes
      await batch.commit()
        .catch((error) => {
          // if an unexpected error, just rethrow it
          if (error.code !== "resource-exhausted")
            throw error;
          // encountered write conflict, fall back to transaction
          return transactionIncrementWeightLostByAllEmployees(db, [
            [oldCompanyRef, -oldHundWeight],
            [newCompanyRef, newHundWeight]
          ]);
        });
    }
  });
There are several points to adapt in your Cloud Function:
Do admin.firestore() instead of admin.firestore
You cannot get the data of the Company document by doing companyRef.data(). You must call the asynchronous get() method.
Use a Transaction when updating the Company document and return the promise returned by this transaction (see here for more details on this key aspect).
So the following code should do the trick.
Note that since we use a Transaction, we actually don't implement the recommendation of the second bullet point above. We use transaction.get(companyRef) instead.
exports.updateCompanyProgress = functions.firestore
  .document("users/{userID}")
  .onUpdate((change, context) => {
    const previousData = change.before.data();
    const data = change.after.data();
    if (previousData === data) {
      return null;
    }
    // You should do admin.firestore() instead of admin.firestore
    const companyRef = admin.firestore().doc(`/companies/${data.company}`);
    const newWeight = data.bodyweight;
    const oldWeight = previousData.bodyweight;
    const lostWeight = oldWeight > newWeight;
    const difference = diff(newWeight, oldWeight);
    if (!newWeight || difference === 0 || !oldWeight) {
      return null;
    } else {
      return admin.firestore().runTransaction((transaction) => {
        return transaction.get(companyRef).then((compDoc) => {
          if (!compDoc.exists) {
            throw "Document does not exist!";
          }
          const currentWeightLost = compDoc.data().weightLostByAllEmployees;
          const newCompanyWeightLoss = calcNewCWL(
            currentWeightLost,
            difference,
            lostWeight
          );
          transaction.update(companyRef, { weightLostByAllEmployees: newCompanyWeightLoss });
        });
      });
    }
  });

Mongoose or mongo skip for pagination returns empty array? [duplicate]

I am writing a webapp with Node.js and Mongoose. How can I paginate the results I get from a .find() call? I would like functionality comparable to "LIMIT 50,100" in SQL.
I am very disappointed by the accepted answers to this question. They will not scale. If you read the fine print on cursor.skip():
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound.
To achieve pagination in a scalable way, combine limit() with at least one filter criterion; a createdOn date suits many purposes.
MyModel.find( { createdOn: { $lte: request.createdOnBefore } } )
.limit( 10 )
.sort( '-createdOn' )
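To fetch the following page, you would then pass the createdOn value of the last document from the previous page as the next cursor. A sketch (using $lt so the boundary document isn't repeated, which assumes createdOn values are unique):

// `results` is the array returned by the previous page's query (hypothetical)
const lastDoc = results[results.length - 1];
MyModel.find( { createdOn: { $lt: lastDoc.createdOn } } )
  .limit( 10 )
  .sort( '-createdOn' )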
After taking a closer look at the Mongoose API with the information provided by Rodolphe, I figured out this solution:
MyModel.find(query, fields, { skip: 10, limit: 5 }, function(err, results) { ... });
Pagination using mongoose, express and jade - Here's a link to my blog with more detail
var perPage = 10
  , page = Math.max(0, req.params.page)

Event.find()
  .select('name')
  .limit(perPage)
  .skip(perPage * page)
  .sort({
    name: 'asc'
  })
  .exec(function(err, events) {
    Event.count().exec(function(err, count) {
      res.render('events', {
        events: events,
        page: page,
        pages: count / perPage
      })
    })
  })
You can chain just like that:
var query = Model.find().sort('mykey', 1).skip(2).limit(5)
Execute the query using exec
query.exec(callback);
In this case, you can add the page and/or limit to your URL as a query string.
For example:
?page=0&limit=25 // this would be added onto your URL: http://localhost:5000?page=0&limit=25
Since it would be a String, we need to convert it to a Number for our calculations. Let's do it using the parseInt method, and let's also provide some default values.
const pageOptions = {
  page: parseInt(req.query.page, 10) || 0,
  limit: parseInt(req.query.limit, 10) || 10
}

sexyModel.find()
  .skip(pageOptions.page * pageOptions.limit)
  .limit(pageOptions.limit)
  .exec(function (err, doc) {
    if (err) { res.status(500).json(err); return; }
    res.status(200).json(doc);
  });
BTW
Pagination starts with 0
You can use a little package called Mongoose Paginate that makes it easier.
$ npm install mongoose-paginate
Then, in your routes or controller, just add:
/**
 * querying for `all` {} items in `MyModel`
 * paginating by second page, 10 items per page (10 results, page 2)
 **/
MyModel.paginate({}, 2, 10, function(error, pageCount, paginatedResults) {
  if (error) {
    console.error(error);
  } else {
    console.log('Pages:', pageCount);
    console.log(paginatedResults);
  }
});
Query:
search = productName
Params:
page = 1
// Pagination
router.get("/search/:page", (req, res, next) => {
  const resultsPerPage = 5;
  let page = req.params.page >= 1 ? req.params.page : 1;
  const query = req.query.search;
  page = page - 1;

  Product.find({ name: query })
    .select("name")
    .sort({ name: "asc" })
    .limit(resultsPerPage)
    .skip(resultsPerPage * page)
    .then((results) => {
      return res.status(200).send(results);
    })
    .catch((err) => {
      return res.status(500).send(err);
    });
});
Here is an example you can try:
var _pageNumber = 2,
  _pageSize = 50;

Student.count({}, function(err, count) {
  Student.find({}, null, {
    sort: {
      Name: 1
    }
  }).skip(_pageNumber > 0 ? ((_pageNumber - 1) * _pageSize) : 0).limit(_pageSize).exec(function(err, docs) {
    if (err)
      res.json(err);
    else
      res.json({
        "TotalCount": count,
        "_Array": docs
      });
  });
});
Try using the Mongoose functions for pagination. limit is the number of records per page, and page is the page number.
var limit = parseInt(body.limit);
var skip = (parseInt(body.page) - 1) * limit;

db.Rankings.find({})
  .sort('-id')
  .limit(limit)
  .skip(skip)
  .exec(function(err, wins) {
  });
This is how I did it in code:
var paginate = 20;
var page = pageNumber;
MySchema.find({}).sort('mykey', 1).skip((pageNumber - 1) * paginate).limit(paginate)
  .exec(function(err, result) {
    // Write some stuff here
  });
That is how I did it.
Simple and powerful pagination solution
async getNextDocs(no_of_docs_required: number = 5, last_doc_id?: string) {
  let docs;
  if (!last_doc_id) {
    // get first 5 docs
    docs = await MySchema.find().sort({ _id: -1 }).limit(no_of_docs_required);
  } else {
    // get next 5 docs according to that last document id
    docs = await MySchema.find({ _id: { $lt: last_doc_id } })
      .sort({ _id: -1 }).limit(no_of_docs_required);
  }
  return docs;
}
last_doc_id: the last document id that you received
no_of_docs_required: the number of docs that you want to fetch, i.e. 5, 10, 50, etc.
If you don't provide last_doc_id to the method, you'll get the latest docs (e.g. the 5 newest)
If you've provided last_doc_id, you'll get the next batch (e.g. the next 5 documents); see the sketch below
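For illustration, a hypothetical caller that walks through every page with this method (treating getNextDocs as a free function for brevity):

async function getAllDocs() {
  const all = [];
  let lastId;
  while (true) {
    const page = await getNextDocs(5, lastId); // fetch the next batch of 5
    all.push(...page);
    if (page.length < 5) break; // a short page means we reached the end
    lastId = page[page.length - 1]._id;
  }
  return all;
}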
There are some good answers giving the solution that uses skip() & limit(); however, in some scenarios we also need the document count to generate pagination. Here's what we do in our projects:
const PaginatePlugin = (schema, options) => {
  options = options || {}
  schema.query.paginate = async function(params) {
    const pagination = {
      limit: options.limit || 10,
      page: 1,
      count: 0
    }
    pagination.limit = parseInt(params.limit) || pagination.limit
    const page = parseInt(params.page)
    pagination.page = page > 0 ? page : pagination.page
    const offset = (pagination.page - 1) * pagination.limit

    const [data, count] = await Promise.all([
      this.limit(pagination.limit).skip(offset),
      this.model.countDocuments(this.getQuery())
    ]);
    pagination.count = count;
    return { data, pagination }
  }
}

mySchema.plugin(PaginatePlugin, { limit: DEFAULT_LIMIT })

// using async/await
const { data, pagination } = await MyModel.find(...)
  .populate(...)
  .sort(...)
  .paginate({ page: 1, limit: 10 })

// or using Promise
MyModel.find(...).paginate(req.query)
  .then(({ data, pagination }) => {
  })
  .catch(err => {
  })
Here is a version that I attach to all my models. It depends on underscore for convenience and async for performance. The opts argument allows for field selection and sorting using the Mongoose syntax.
var _ = require('underscore');
var async = require('async');

function findPaginated(filter, opts, cb) {
  var defaults = { skip: 0, limit: 10 };
  opts = _.extend({}, defaults, opts);
  filter = _.extend({}, filter);

  var cntQry = this.find(filter);
  var qry = this.find(filter);

  if (opts.sort) {
    qry = qry.sort(opts.sort);
  }
  if (opts.fields) {
    qry = qry.select(opts.fields);
  }

  qry = qry.limit(opts.limit).skip(opts.skip);

  async.parallel(
    [
      function (cb) {
        cntQry.count(cb);
      },
      function (cb) {
        qry.exec(cb);
      }
    ],
    function (err, results) {
      if (err) return cb(err);
      var count = 0, ret = [];

      _.each(results, function (r) {
        if (typeof(r) == 'number') {
          count = r;
        } else if (typeof(r) != 'number') {
          ret = r;
        }
      });

      cb(null, { totalCount: count, results: ret });
    }
  );

  return qry;
}
Attach it to your model schema.
MySchema.statics.findPaginated = findPaginated;
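A hypothetical call would then look like this (the filter and options values are made-up examples):

MyModel.findPaginated({ active: true }, { skip: 20, limit: 10, sort: '-createdAt' }, function (err, page) {
  if (err) return console.error(err);
  console.log(page.totalCount, page.results.length);
});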
The answers above hold good. Here's just an add-on for anyone who prefers async/await rather than promises:
const findAllFoo = async (req, resp, next) => {
  const pageSize = 10;
  const currentPage = 1;

  try {
    const foos = await FooModel.find() // find all documents
      .skip(pageSize * (currentPage - 1)) // we will not retrieve all records, but will skip the first 'n' records
      .limit(pageSize); // will limit/restrict the number of records to display

    const numberOfFoos = await FooModel.countDocuments(); // count the number of records for that model

    resp.setHeader('max-records', numberOfFoos);
    resp.status(200).json(foos);
  } catch (err) {
    resp.status(500).json({
      message: err
    });
  }
};
You can use the following lines of code as well:
per_page = parseInt(req.query.per_page) || 10
page_no = parseInt(req.query.page_no) || 1
var pagination = {
  limit: per_page,
  skip: per_page * (page_no - 1)
}
users = await User.find({<CONDITION>}).limit(pagination.limit).skip(pagination.skip).exec()
This code will work with the latest version of MongoDB.
A solid approach to implement this would be to pass the values from the frontend using a query string. Let's say we want to get page #2 and also limit the output to 25 results.
The query string would look like this: ?page=2&limit=25 // this would be added onto your URL: http://localhost:5000?page=2&limit=25
Let's see the code:
// We would receive the values with req.query.<<valueName>> => e.g. req.query.page
// Since it would be a String we need to convert it to a Number in order to do our
// necessary calculations. Let's do it using the parseInt() method and let's also provide some default values:

const page = parseInt(req.query.page, 10) || 1; // getting the 'page' value
const limit = parseInt(req.query.limit, 10) || 25; // getting the 'limit' value
const startIndex = (page - 1) * limit; // this is how we would calculate the start index aka the SKIP value
const endIndex = page * limit; // this is how we would calculate the end index

// We also need the 'total' and we can get it easily using the Mongoose built-in **countDocuments** method
const total = await <<modelName>>.countDocuments();

// skip() will return a certain number of results after a certain number of documents.
// limit() is used to specify the maximum number of results to be returned.

// Let's assume that both are set (if that's not the case, the default value will be used)
query = query.skip(startIndex).limit(limit);

// Executing the query
const results = await query;

// Pagination result
// Let's now prepare an object for the frontend
const pagination = {};

// If the endIndex is smaller than the total number of documents, we have a next page
if (endIndex < total) {
  pagination.next = {
    page: page + 1,
    limit
  };
}

// If the startIndex is greater than 0, we have a previous page
if (startIndex > 0) {
  pagination.prev = {
    page: page - 1,
    limit
  };
}

// Implementing some final touches and making a successful response (Express.js)
const advancedResults = {
  success: true,
  count: results.length,
  pagination,
  data: results
};

// That's it. All we have to do now is send the `results` to the frontend.
res.status(200).json(advancedResults);
I would suggest implementing this logic into middleware so you can be able to use it for various routes/ controllers.
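A minimal sketch of that middleware idea, under the same assumptions as the walkthrough above (the model is passed in; names like paginatedResults and advancedResults are made up):

// Hypothetical reusable pagination middleware for Express
const paginatedResults = (model) => async (req, res, next) => {
  const page = parseInt(req.query.page, 10) || 1;
  const limit = parseInt(req.query.limit, 10) || 25;
  const startIndex = (page - 1) * limit;
  const total = await model.countDocuments();
  const results = await model.find().skip(startIndex).limit(limit);
  const pagination = {};
  if (page * limit < total) pagination.next = { page: page + 1, limit };
  if (startIndex > 0) pagination.prev = { page: page - 1, limit };
  res.advancedResults = { success: true, count: results.length, pagination, data: results };
  next();
};

// Usage:
// router.get('/', paginatedResults(MyModel), (req, res) => res.status(200).json(res.advancedResults));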
You can do it using mongoose-paginate-v2. For more info, click here.
const mongoose = require('mongoose');
const mongoosePaginate = require('mongoose-paginate-v2');

const mySchema = new mongoose.Schema({
  // your schema code
});
mySchema.plugin(mongoosePaginate);

const myModel = mongoose.model('SampleModel', mySchema);

myModel.paginate().then({}) // Usage
I have found a very efficient way and implemented it myself. I think this way is the best for the following reasons:
It does not use skip, whose time complexity doesn't scale well;
It uses IDs to query the document. IDs are indexed by default in MongoDB, making them very fast to query;
It uses lean queries; these are known to be VERY performant, as they remove a lot of the "magic" from Mongoose and return documents that come kind of "raw" from MongoDB;
It doesn't depend on any third-party packages that might contain vulnerabilities or have vulnerable dependencies.
The only caveat to this is that some methods of Mongoose, such as .save(), will not work well with lean queries; such methods are listed in this awesome blog post. I really recommend this series, because it considers a lot of aspects, such as type safety (which prevents critical errors) and PUT/PATCH.
I will provide some context: this is a Pokémon repository. The pagination works as follows: the API receives unsafeId from the req.body object of Express. We need to convert this to a string in order to prevent NoSQL injection (it could be an object with malicious filters). This unsafeId can be an empty string or the ID of the last item of the previous page. It goes like this:
/**
 * @description GET All with pagination, will return 200 on success
 * and receives the last ID of the previous page or undefined for the first page.
 * Note: You should take care, read and consider the Off-By-One error.
 * @param {string|undefined|unknown} unsafeId - An entire page that comes after this ID will be returned
 */
async readPages(unsafeId) {
  try {
    const id = String(unsafeId || '');
    let criteria;
    if (id) {
      criteria = { _id: { $gt: id } };
    } // else criteria is undefined

    // This query looks a bit redundant on `lean`, I just really wanted to make sure it is lean
    const pokemon = await PokemonSchema.find(
      criteria || {},
    ).setOptions({ lean: true }).limit(15).lean();

    // This would throw on an empty page
    // if (pokemon.length < 1) {
    //   throw new PokemonNotFound();
    // }

    return pokemon;
  } catch (error) {
    // In this implementation, any error that is not defined by us
    // will not be returned by the API, to prevent information disclosure.
    // Our errors have this property, which indicates
    // that no sensitive information is contained within this object
    if (error.returnErrorResponse) {
      throw error;
    } // else
    console.error(error.message);
    throw new InternalServerError();
  }
}
Now, to consume this and avoid Off-By-One errors in the frontend, you do it like the following, considering that pokemons is the array of Pokémon documents returned from the API:
// Page zero
const pokemons = await fetchWithPagination({ 'page': undefined });

// Page one
// You can also use a fixed number of pages instead of `pokemons.length`
// But `pokemons.length` is more reliable (and a bit slower)
// You will have trouble with the last page if you use it with a constant
// predefined number
const id = pokemons[pokemons.length - 1]._id;
if (!id) {
  throw new Error('Last element from page zero has no ID');
} // else
const page2 = await fetchWithPagination({ 'page': id });
As a note here, Mongoose IDs are always sequential; this means that any newer ID will always be greater than an older one. That is the foundation of this answer.
This approach has been tested against Off-By-One errors; for instance, the last element of a page could be returned as the first element of the following one (duplicated), or an element that sits between the last of the previous page and the first of the current page might disappear.
When you are done with all the pages and request a page after the last element (one that does not exist), the response will be an empty array with 200 (OK), which is awesome!
The easiest and fastest way is to paginate with the ObjectId.
Example:
Initial load condition
condition = {limit:12, type:""};
Take the first and last ObjectId from response data
Page next condition
condition = {limit:12, type:"next", firstId:"57762a4c875adce3c38c662d", lastId:"57762a4c875adce3c38c6615"};
Page next condition
condition = {limit:12, type:"next", firstId:"57762a4c875adce3c38c6645", lastId:"57762a4c875adce3c38c6675"};
In mongoose
var condition = {};
var sort = { _id: 1 };
if (req.body.type == "next") {
  condition._id = { $gt: req.body.lastId };
} else if (req.body.type == "prev") {
  sort = { _id: -1 };
  condition._id = { $lt: req.body.firstId };
}

var query = Model.find(condition, {}, { sort: sort }).limit(req.body.limit);
query.exec(function(err, properties) {
  return res.json({ "result": properties });
});
The best approach (IMO) is to use skip and limit, BUT within a limited set of collections or documents.
To make the query within limited documents, we can use a specific index, like an index on a DATE type field. See below:
let page = ctx.request.body.page || 1
let size = ctx.request.body.size || 10
let DATE_FROM = ctx.request.body.date_from
let DATE_TO = ctx.request.body.date_to

var start = (parseInt(page) - 1) * parseInt(size)

let result = await Model.find({ created_at: { $gte: DATE_FROM, $lte: DATE_TO } })
  .sort({ _id: -1 })
  .select('<fields>')
  .skip(start)
  .limit(size)
  .exec(callback)
The easiest plugin for pagination:
https://www.npmjs.com/package/mongoose-paginate-v2
Add the plugin to a schema and then use the model's paginate method:
var mongoose = require('mongoose');
var mongoosePaginate = require('mongoose-paginate-v2');

var mySchema = new mongoose.Schema({
  /* your schema definition */
});
mySchema.plugin(mongoosePaginate);

var myModel = mongoose.model('SampleModel', mySchema);

myModel.paginate().then({}) // Usage
let page, limit, skip, lastPage, counts, results, query;
page = req.params.page * 1 || 1; // This is the page, fetched from the server
limit = req.params.limit * 1 || 1; // This is the limit, also fetched from the server
skip = (page - 1) * limit; // Number of documents to skip
lastPage = page * limit; // last index
counts = await userModel.countDocuments(); // Number of documents in the collection
query = query.skip(skip).limit(limit); // current page (`query` is assumed to be built earlier)

const paginate = {};

// For previous page
if (skip > 0) {
  paginate.prev = {
    page: page - 1,
    limit: limit
  };
}

// For next page
if (lastPage < counts) {
  paginate.next = {
    page: page + 1,
    limit: limit
  };
}

results = await query; // Here are the final results of the query.
const page = req.query.page * 1 || 1;
const limit = req.query.limit * 1 || 1000;
const skip = (page - 1) * limit;
query = query.skip(skip).limit(limit);
This is an example function for getting the results of a skills model with pagination and limit options:
export function get_skills(req, res) {
  console.log('get_skills');
  var page = req.body.page; // 1 or 2
  var size = req.body.size; // 5 or 10 per page
  var query = {};
  if (page < 0 || page === 0) {
    var result = { 'status': 401, 'message': 'invalid page number, should start with 1' };
    return res.json(result);
  }
  query.skip = size * (page - 1);
  query.limit = size;
  Skills.count({}, function(err1, tot_count) { // to get the total count of skills
    if (err1) {
      res.json({
        status: 401,
        message: 'something went wrong!',
        err: err1,
      });
    } else {
      Skills.find({}, {}, query).sort({ 'name': 1 }).exec(function(err, skill_doc) {
        if (!err) {
          res.json({
            status: 200,
            message: 'Skills list',
            data: skill_doc,
            tot_count: tot_count,
          });
        } else {
          res.json({
            status: 401,
            message: 'something went wrong',
            err: err
          });
        }
      }); // Skills.find end
    }
  }); // Skills.count end
}
Using ts-mongoose-pagination
const trainers = await Trainer.paginate(
  { user: req.userId },
  {
    perPage: 3,
    page: 1,
    select: '-password, -createdAt -updatedAt -__v',
    sort: { createdAt: -1 },
  }
)
return res.status(200).json(trainers)
The code below is working fine for me.
You can also add find filters, and use the same filters in the count query to get accurate results.
export const yourController = async (req, res) => {
  const { body } = req;
  var perPage = body.limit;
  var page = Math.max(0, body.page);

  yourModel
    .find() // You can add your filters inside
    .limit(perPage)
    .skip(perPage * (page - 1))
    .exec(function (err, dbRes) {
      yourModel.count().exec(function (err, count) { // You can add your filters inside
        res.send(
          JSON.stringify({
            Articles: dbRes,
            page: page,
            pages: count / perPage,
          })
        );
      });
    });
};
You can write the query like this:
mySchema.find().skip((page - 1) * per_page).limit(per_page).exec(function(err, articles) {
  if (err) {
    return res.status(400).send({
      message: err
    });
  } else {
    res.json(articles);
  }
});
page: the page number coming from the client as a request parameter.
per_page: the number of results shown per page.
If you are using the MEAN stack, the following blog post provides much of the information needed to create pagination in the front end using Angular UI Bootstrap, with the Mongoose skip and limit methods in the backend.
see : https://techpituwa.wordpress.com/2015/06/06/mean-js-pagination-with-angular-ui-bootstrap/
You can use skip() and limit(), but it's very inefficient. A better solution is a sort on an indexed field plus limit().
We at Wunderflats have published a small lib here: https://github.com/wunderflats/goosepage
It uses the first way.
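For reference, a minimal sketch of the indexed-field approach (the field name and cursor variable are assumptions):

// Range-based paging on an indexed field instead of skip()
MyModel.find({ createdAt: { $lt: lastSeenCreatedAt } })
  .sort({ createdAt: -1 })
  .limit(10)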
If you are using mongoose as a source for a RESTful API, have a look at 'restify-mongoose' and its queries. It has exactly this functionality built in.
Any query on a collection provides headers that are helpful here
test-01:~$ curl -s -D - localhost:3330/data?sort=-created -o /dev/null
HTTP/1.1 200 OK
link: </data?sort=-created&p=0>; rel="first", </data?sort=-created&p=1>; rel="next", </data?sort=-created&p=134715>; rel="last"
.....
Response-Time: 37
So basically you get a generic server with a relatively linear load time for queries on collections. That is awesome and something to look at if you want to build your own implementation.
app.get("/:page",(req,res)=>{
post.find({}).then((data)=>{
let per_page = 5;
let num_page = Number(req.params.page);
let max_pages = Math.ceil(data.length/per_page);
if(num_page == 0 || num_page > max_pages){
res.render('404');
}else{
let starting = per_page*(num_page-1)
let ending = per_page+starting
res.render('posts', {posts:data.slice(starting,ending), pages: max_pages, current_page: num_page});
}
});
});

Limit number of records in firebase

Every minute I have a script that pushes a new record into my Firebase database.
What I want is to delete the oldest records once the length of the list reaches a fixed value.
I have been through the docs and other posts, and the thing I have found so far is something like this:
// Max number of lines of the chat history.
const MAX_ARDUINO = 10;

exports.arduinoResponseLength = functions.database.ref('/arduinoResponse/{res}').onWrite(event => {
  const parentRef = event.data.ref.parent;
  return parentRef.once('value').then(snapshot => {
    if (snapshot.numChildren() >= MAX_ARDUINO) {
      let childCount = 0;
      let updates = {};
      snapshot.forEach(function(child) {
        if (++childCount <= snapshot.numChildren() - MAX_ARDUINO) {
          updates[child.key] = null;
        }
      });
      // Update the parent. This effectively removes the extra children.
      return parentRef.update(updates);
    }
  });
});
The problem is: onWrite seems to download all the related data every time it is triggered.
This is a fine process when the list is not so long. But I have about 4000 records, and every month it seems that I blow through my Firebase download quota because of it.
Does anyone know how to handle this kind of situation?
OK, so in the end I came up with 3 functions. One updates the number of Arduino records, one fully recounts them if the counter is missing, and the last one uses the counter to make a query with the limitToFirst filter, so it retrieves only the relevant data to remove.
It is actually a combination of these two examples provided by Firebase:
https://github.com/firebase/functions-samples/tree/master/limit-children
https://github.com/firebase/functions-samples/tree/master/child-count
Here is my final result
const MAX_ARDUINO = 1500;

exports.deleteOldArduino = functions.database.ref('/arduinoResponse/{resId}/timestamp').onWrite(event => {
  const collectionRef = event.data.ref.parent.parent;
  const countRef = collectionRef.parent.child('arduinoResCount');
  return countRef.once('value').then(snapCount => {
    return collectionRef.limitToFirst(snapCount.val() - MAX_ARDUINO).transaction(snapshot => {
      snapshot = null;
      return snapshot;
    });
  });
});

exports.trackArduinoLength = functions.database.ref('/arduinoResponse/{resId}/timestamp').onWrite(event => {
  const collectionRef = event.data.ref.parent.parent;
  const countRef = collectionRef.parent.child('arduinoResCount');
  // Return the promise from countRef.transaction() so our function
  // waits for this async event to complete before it exits.
  return countRef.transaction(current => {
    if (event.data.exists() && !event.data.previous.exists()) {
      return (current || 0) + 1;
    } else if (!event.data.exists() && event.data.previous.exists()) {
      return (current || 0) - 1;
    }
  }).then(() => {
    console.log('Counter updated.');
  });
});

exports.recountArduino = functions.database.ref('/arduinoResCount').onWrite(event => {
  if (!event.data.exists()) {
    const counterRef = event.data.ref;
    const collectionRef = counterRef.parent.child('arduinoResponse');
    // Return the promise from counterRef.set() so our function
    // waits for this async event to complete before it exits.
    return collectionRef.once('value')
      .then(arduinoRes => counterRef.set(arduinoRes.numChildren()));
  }
});
I have not tested it yet, but I will post my results soon!
I also heard that one day Firebase will add a "size" query, which is definitely missing in my opinion.
