Firebase: Run a query synchronously - javascript
I am trying to set some user data depending on the number of users already in my users collection. This includes a userId, which should be a number.
exports.setUserData = functions.firestore.document('/users/{documentId}')
  .onCreate(event => {
    return admin.firestore().collection('users')
      .orderBy('userId', 'desc').limit(1)
      .get().then(function(snapshot) {
        const user = snapshot.docs[0].data();
        var lastUserId = user.userId;
        var userObject = {
          userId: lastUserId + 1
          // ... some other fields here
        };
        return event.data.ref.set(userObject, {
          merge: true
        });
      });
  });
One issue I noticed: quickly adding two users results in both documents getting the same userId, presumably because the get() query is asynchronous.
Is there a way to make this whole setUserData method synchronous?
There is no way to make Cloud Functions run your function invocations sequentially. That would also be quite contrary to the serverless promise of auto-scaling to demand.
But in your case there's a much simpler, lower-level primitive to get a sequential ID. You should store the last known user ID in the database and then use a transaction to read/update it.
var db = admin.firestore();
var counterRef = db.collection('counters').doc('userid');

return db.runTransaction(function(transaction) {
  // This code may get re-run multiple times if there are conflicts.
  return transaction.get(counterRef).then(function(counterDoc) {
    var newValue = (counterDoc.exists ? counterDoc.data().value : 0) + 1;
    transaction.update(counterRef, { value: newValue });
  });
});
Solution
var counterRef = admin.firestore().collection('counters').doc('userId');

return admin.firestore().runTransaction(function(transaction) {
  // This code may get re-run multiple times if there are conflicts.
  return transaction.get(counterRef).then(function(counterDoc) {
    var newValue = (counterDoc.exists ? counterDoc.data().value : 0) + 1;
    transaction.update(counterRef, {
      "value": newValue
    });
  });
}).then(t => {
  // Return the promise so the function waits for the write to finish.
  return admin.firestore().runTransaction(function(transaction) {
    // This code may get re-run multiple times if there are conflicts.
    return transaction.get(counterRef).then(function(counterDoc) {
      var userIdCounter = counterDoc.data().value || 0;
      var userObject = {
        userId: userIdCounter
      };
      return event.data.ref.set(userObject, {
        merge: true
      });
    });
  });
});
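Note that splitting the work across two transactions reopens a small race: another invocation can bump the counter between the first transaction's write and the second one's read. A tighter variant (a sketch, assuming the same counters/userId document with a value field) increments the counter and writes the new user document in a single transaction:

var counterRef = admin.firestore().collection('counters').doc('userId');

return admin.firestore().runTransaction(function(transaction) {
  // All reads must happen before any writes inside a Firestore transaction.
  return transaction.get(counterRef).then(function(counterDoc) {
    var newValue = (counterDoc.exists ? counterDoc.data().value : 0) + 1;
    transaction.set(counterRef, { value: newValue }, { merge: true });
    // Writing the new user document in the same transaction means no other
    // invocation can observe the counter between the read and this write.
    transaction.set(event.data.ref, { userId: newValue }, { merge: true });
    return newValue;
  });
});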
Related
Trouble with Addition Assignment in Array from Firebase
I have a scenario where I need to query multiple collections at once and retrieve the values based on the collection name. I use Promise.all to do so and it works accordingly, like so:

var dbPromises = [];
dbPromises.push(
  admin.firestore().collection("collection1").where("user_id", "==", uid).get(),
  admin.firestore().collection("collection2").where("user_id", "==", uid).get(),
  admin.firestore().collection("collection3").where("user_id", "==", uid).get(),
);

const promiseConst = await Promise.all(dbPromises);
promiseConst.forEach((qs) => {
  if (qs.size > 0) {
    if (qs.query._queryOptions.collectionId == "collection1") {
      qs.docs.map((doc) => {
        valuesArr1.push(doc.data().arr);
      });
    } else if (qs.query._queryOptions.collectionId == "Collection2") {
      qs.docs.map((doc) => {
        valuesArr2.push(doc.data());
      });
    } else if (qs.query._queryOptions.collectionId == "collection3") {
      qs.docs.map((doc) => {
        valuesArr3.push(doc.data());
      });
    }
  } else {
    return
  }
});

for (var i = 0; i < valuesArr1.length; i++) {
  if (valuesArr1[i].desiredData) {
    console.log('data from for loop on data array', valuesArr1[i].desiredData)
    globalVariable += `<img src="${valuesArr1[i].desiredData}">`;
  }
}

From the first collection I retrieve an array from a Firestore document; from the following collections I just retrieve all documents. This all 'works': when I console.log in the Functions console, the data shows up exactly as expected. It's only when I iterate over the data and assign the results to a global variable to use elsewhere that strange behavior occurs. The console.log shows the desired data with no issues, but when I interpolate that data into HTML and send it off with Nodemailer, undefined is always first in the response when I use the += addition assignment operator. If I just use the = assignment operator there's no undefined, but then I obviously don't get all the data I'm expecting. There are no undefined values or documents in the collections I'm retrieving; I've checked thoroughly and even deleted documents to make sure of it. After days of researching I've come to the conclusion it has to do with the asynchronous nature of the promise I'm working with and the data not being immediately ready when I iterate it. Can someone help me understand what I'm doing wrong and how to fix it in Node?
I figured out a solution to my problem and would like to share it in hopes it saves a future viewer some time. Before, I was storing the results of the array from Firebase inside a global variable. To save some head scratching I'll post the code again below.

var globalVariableArray = []
var globalVariable

var dbPromises = [];
dbPromises.push(
  admin.firestore().collection("DataCollection").where("user_id", "==", uid).get()
);

const promiseConst = await Promise.all(dbPromises);
promiseConst.forEach((qs) => {
  if (qs.size > 0) {
    if (qs.query._queryOptions.collectionId == "DataCollection") {
      Promise.all(
        qs.docs.map(doc => {
          globalVariableArray = doc.data().arrayWithDesiredData;
        })
      );
    } else {
      return
    }
  }
});

globalVariableArray.map(gv => {
  globalVariable += `<p>${gv.desiredData}</p>` // <--- Right here is where the problem area was
})

var mailOptions = {
  from: 'foo@blurdybloop.com',
  to: 'bar@blurdybloop.com',
  subject: 'Almost but not quite',
  html: `${globalVariable}`
};

The above code gives the expected output, except that the output would always show undefined before the data, no matter how the array from Firebase was iterated over. After strengthening my Google-Fu, I worked out the following solution:

var mailOptions = {
  from: 'foo@blurdybloop.com',
  to: 'bar@blurdybloop.com',
  subject: 'It works!!',
  // Loop through the array inside the interpolation and voila!
  // No more undefined showing up in the results.
  // (.join('') avoids the commas an interpolated array would otherwise insert)
  html: `${globalVariableArray.map(dataIWantedAllAlong => `<p>${dataIWantedAllAlong.desiredData}</p>`).join('')}`
};

I perform the loop inside the brackets where I interpolate the dynamic data and am no longer getting that pesky undefined showing up in my emails. Safe travels and happy coding to you all!
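For completeness: the stray undefined in the first attempt comes from concatenating onto a variable that was declared but never initialized, since undefined + '<p>…</p>' stringifies the undefined. Initializing the accumulator to an empty string would also have fixed the original += approach; a minimal sketch:

var globalVariable = ''; // start from an empty string, not undefined

globalVariableArray.forEach(gv => {
  globalVariable += `<p>${gv.desiredData}</p>`;
});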
Wait for all Firebase data query requests before executing code
I am trying to fetch data from different collections in my Cloud Firestore database before I process them and apply them to a batch. I created two async functions: one to capture the data and another to execute certain code only after all data is collected, since I didn't want the code executing and creating errors before the data is fetched. When I try to access matchesObject after the data-collection function has finished, it keeps saying "it cannot access a property matchStatus of undefined". I thought I took care of that with async and await? Could anyone shed some light as to why it is undefined one moment?

axios.request(options).then(function(response) {
  console.log('Total matches count :' + response.data.matches.length);
  const data = response.data;

  var matchesSnapshot;
  var marketsSnapshot;
  var tradesSnapshot;
  var betsSnapshot;

  matchesObject = {};
  marketsObject = {};
  tradesObject = {};
  betsObject = {};

  start();

  async function checkDatabase() {
    matchesSnapshot = await db.collection('matches').get();
    matchesSnapshot.forEach(doc => {
      matchesObject[doc.id] = doc.data();
      console.log('matches object: ' + doc.id.toString())
    });
    marketsSnapshot = await db.collection('markets').get();
    marketsSnapshot.forEach(doc2 => {
      marketsObject[doc2.id] = doc2.data();
      console.log('markets object: ' + doc2.id.toString())
    });
    tradesSnapshot = await db.collection('trades').get();
    tradesSnapshot.forEach(doc3 => {
      tradesObject[doc3.id] = doc3.data();
      console.log('trades object: ' + doc3.id.toString())
    });
    betsSnapshot = await db.collection('bets').get();
    betsSnapshot.forEach(doc4 => {
      betsObject[doc4.id] = doc4.data();
      console.log('bets object: ' + doc4.id.toString())
    });
  }

  async function start() {
    await checkDatabase();

    // this is the part which is undefined; it keeps saying it can't access property matchStatus of undefined
    console.log('here is matches object ' + matchesObject['302283']['matchStatus']);

    if (Object.keys(matchesObject).length != 0) {
      for (let bets of Object.keys(betsObject)) {
        if (matchesObject[betsObject[bets]['tradeMatchId']]['matchStatus'] == 'IN_PLAY' && betsObject[bets]['matched'] == false) {
          var sfRef = db.collection('users').doc(betsObject[bets]['user']);
          batch11.set(sfRef, {
            accountBalance: admin.firestore.FieldValue + parseFloat(betsObject[bets]['stake']),
          }, { merge: true });
          var sfRef = db.collection('bets').doc(bets);
          batch12.set(sfRef, {
            tradeCancelled: true,
          }, { merge: true });
        }
      }
    }
  }
});
There are too many smaller issues in the current code to try to debug them one-by-one, so this refactor introduces various tests against your data. It currently won't make any changes to your database and is meant to be a replacement for your start() function. One of the main differences from your current code is that it doesn't unnecessarily download 4 collections' worth of documents (two of them aren't even used in the code you've included).

Steps
First, it gets all the bet documents that have matched == false. From these documents, it checks whether they have any syntax errors and reports them to the console. For each valid bet document, the ID of its linked match document is grabbed so we can then fetch only the match documents we actually need. Then we queue up the changes to the user's balance and the bet's document. Finally, we report any changes to be done and commit them (once you uncomment the line).

Code
Note: fetchDocumentsById() is defined in this gist. It's a helper function that allows someCollectionRef.where(FieldPath.documentId(), 'in', arrayOfIds) to take more than 10 IDs at once.

async function applyBalanceChanges() {
  const betsCollectionRef = db.collection('bets');
  const matchesCollectionRef = db.collection('matches');
  const usersCollectionRef = db.collection('users');

  const betDataMap = {}; // Record<string, BetData>

  await betsCollectionRef
    .where('matched', '==', false)
    .get()
    .then((betsSnapshot) => {
      betsSnapshot.forEach(betDoc => {
        betDataMap[betDoc.id] = betDoc.data();
      });
    });

  const matchDataMap = {}; // Record<string, MatchData | undefined>

  // betIdList contains all IDs that will be processed
  const betIdList = Object.keys(betDataMap).filter(betId => {
    const betData = betDataMap[betId];

    if (!betData) {
      console.log(`WARN: Skipped Bet #${betId} because it was falsy (actual value: ${betData})`);
      return false;
    }

    const matchId = betData.tradeMatchId;
    if (!matchId) {
      console.log(`WARN: Skipped Bet #${betId} because it had a falsy match ID (actual value: ${matchId})`);
      return false;
    }

    if (!betData.user) {
      console.log(`WARN: Skipped Bet #${betId} because it had a falsy user ID (actual value: ${betData.user})`);
      return false;
    }

    const stakeAsNumber = Number(betData.stake); // not using parseFloat as it's too lax
    if (isNaN(stakeAsNumber)) {
      console.log(`WARN: Skipped Bet #${betId} because it had an invalid stake value (original NaN value: ${betData.stake})`);
      return false;
    }

    matchDataMap[matchId] = undefined; // using undefined because it's the result of `doc.data()` when the document doesn't exist
    return true;
  });

  await fetchDocumentsById(
    matchesCollectionRef,
    Object.keys(matchDataMap),
    (matchDoc) => matchDataMap[matchDoc.id] = matchDoc.data()
  );

  const batch = db.batch();
  let queuedUpdates = 0;

  betIdList.forEach(betId => {
    const betData = betDataMap[betId];
    const matchData = matchDataMap[betData.tradeMatchId];

    if (matchData === undefined) {
      console.log(`WARN: Skipped /bets/${betId}, because its linked match doesn't exist!`);
      return;
    }

    if (matchData.matchStatus !== 'IN_PLAY') {
      console.log(`INFO: Skipped /bets/${betId}, because its linked match status is not "IN_PLAY" (actual value: ${matchData.matchStatus})`);
      return;
    }

    const betRef = betsCollectionRef.doc(betId);
    const betUserRef = usersCollectionRef.doc(betData.user);

    batch.update(betUserRef, { accountBalance: admin.firestore.FieldValue.increment(Number(betData.stake)) });
    batch.update(betRef, { tradeCancelled: true });
    queuedUpdates += 2; // for logging
  });

  console.log(`INFO: Batch currently has ${queuedUpdates} queued`);

  // only uncomment when you are ready to make changes
  // return batch.commit();
}

Usage:

axios.request(options)
  .then(function(response) {
    const data = response.data;
    console.log('INFO: Total matches count from API: ' + data.matches.length);
    return applyBalanceChanges();
  });
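Since the gist itself isn't reproduced here, a hypothetical stand-in for fetchDocumentsById() (an assumption based on its description above, not the gist's actual code) would chunk the IDs into groups of 10, because Firestore's in operator historically accepted at most 10 values per query:

async function fetchDocumentsById(collectionRef, ids, onDoc) {
  // Hypothetical helper: split the IDs into chunks of 10 and run one
  // 'in' query per chunk, invoking onDoc for every document found.
  const chunks = [];
  for (let i = 0; i < ids.length; i += 10) {
    chunks.push(ids.slice(i, i + 10));
  }
  await Promise.all(chunks.map(chunk =>
    collectionRef
      .where(admin.firestore.FieldPath.documentId(), 'in', chunk)
      .get()
      .then(snapshot => snapshot.forEach(onDoc))
  ));
}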
Mongoose or mongo skip for pagination returns empty array? [duplicate]
I am writing a webapp with Node.js and mongoose. How can I paginate the results I get from a .find() call? I would like functionality comparable to "LIMIT 50,100" in SQL.
I'm very disappointed by the accepted answers in this question. This will not scale. If you read the fine print on cursor.skip():

The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results.

As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound. To achieve pagination in a scalable way, combine limit() with at least one filter criterion; a createdOn date suits many purposes.

MyModel.find({ createdOn: { $lte: request.createdOnBefore } })
  .limit(10)
  .sort('-createdOn')
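To fetch the following page, take the createdOn of the last document you returned and pass it back as the new bound; a sketch (using $lt rather than $lte so the boundary document isn't repeated, ties on createdOn aside):

// First page: the 10 newest documents.
const firstPage = await MyModel.find({})
  .sort('-createdOn')
  .limit(10);

// Next page: everything strictly older than the last document already sent.
const last = firstPage[firstPage.length - 1];
const nextPage = await MyModel.find({ createdOn: { $lt: last.createdOn } })
  .sort('-createdOn')
  .limit(10);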
After taking a closer look at the Mongoose API with the information provided by Rodolphe, I figured out this solution:

MyModel.find(query, fields, { skip: 10, limit: 5 }, function(err, results) { ... });
Pagination using mongoose, express and jade - here's a link to my blog with more detail.

var perPage = 10
  , page = Math.max(0, req.params.page)

Event.find()
  .select('name')
  .limit(perPage)
  .skip(perPage * page)
  .sort({ name: 'asc' })
  .exec(function(err, events) {
    Event.count().exec(function(err, count) {
      res.render('events', {
        events: events,
        page: page,
        pages: count / perPage
      })
    })
  })
You can chain just like that:

var query = Model.find().sort({ mykey: 1 }).skip(2).limit(5)

Execute the query using exec:

query.exec(callback);
In this case, you can add the query page and/or limit to your URL as a query string. For example:

?page=0&limit=25
// this would be appended to your URL: http://localhost:5000?page=0&limit=25

Since it would be a String, we need to convert it to a Number for our calculations. Let's do it using the parseInt method, and let's also provide some default values.

const pageOptions = {
  page: parseInt(req.query.page, 10) || 0,
  limit: parseInt(req.query.limit, 10) || 10
}

sexyModel.find()
  .skip(pageOptions.page * pageOptions.limit)
  .limit(pageOptions.limit)
  .exec(function (err, doc) {
    if (err) {
      res.status(500).json(err);
      return;
    }
    res.status(200).json(doc);
  });

BTW: pagination starts with 0.
You can use a little package called mongoose-paginate that makes it easier.

$ npm install mongoose-paginate

Then, in your routes or controller, just add:

/**
 * querying for `all` {} items in `MyModel`
 * paginating by second page, 10 items per page (10 results, page 2)
 **/
MyModel.paginate({}, 2, 10, function(error, pageCount, paginatedResults) {
  if (error) {
    console.error(error);
  } else {
    console.log('Pages:', pageCount);
    console.log(paginatedResults);
  }
});
Query: search = productName
Params: page = 1

// Pagination
router.get("/search/:page", (req, res, next) => {
  const resultsPerPage = 5;
  let page = req.params.page >= 1 ? req.params.page : 1;
  const query = req.query.search;
  page = page - 1

  Product.find({ name: query })
    .select("name")
    .sort({ name: "asc" })
    .limit(resultsPerPage)
    .skip(resultsPerPage * page)
    .then((results) => {
      return res.status(200).send(results);
    })
    .catch((err) => {
      return res.status(500).send(err);
    });
});
This is an example you can try:

var _pageNumber = 2,
    _pageSize = 50;

Student.count({}, function(err, count) {
  Student.find({}, null, { sort: { Name: 1 } })
    .skip(_pageNumber > 0 ? ((_pageNumber - 1) * _pageSize) : 0)
    .limit(_pageSize)
    .exec(function(err, docs) {
      if (err)
        res.json(err);
      else
        res.json({
          "TotalCount": count,
          "_Array": docs
        });
    });
});
Try using Mongoose's query functions for pagination. limit is the number of records per page and page is the page number:

var limit = parseInt(body.limit);
var skip = (parseInt(body.page) - 1) * limit;

db.Rankings.find({})
  .sort('-id')
  .limit(limit)
  .skip(skip)
  .exec(function(err, wins) {
  });
This is how I did it in code:

var paginate = 20;
var page = pageNumber;

MySchema.find({})
  .sort({ mykey: 1 })
  .skip((pageNumber - 1) * paginate)
  .limit(paginate)
  .exec(function(err, result) {
    // Write some stuff here
  });
A simple and powerful pagination solution:

async getNextDocs(no_of_docs_required: number = 5, last_doc_id?: string) {
  let docs

  if (!last_doc_id) {
    // get first 5 docs
    docs = await MySchema.find().sort({ _id: -1 }).limit(no_of_docs_required)
  } else {
    // get next 5 docs according to that last document id
    docs = await MySchema.find({ _id: { $lt: last_doc_id } })
      .sort({ _id: -1 }).limit(no_of_docs_required)
  }

  return docs
}

last_doc_id: the last document id that you got.
no_of_docs_required: the number of docs that you want to fetch, i.e. 5, 10, 50, etc.

If you don't provide last_doc_id to the method, you'll get e.g. the 5 latest docs. If you do provide last_doc_id, you'll get the next e.g. 5 documents.
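Usage might look like this (a sketch, assuming the method is exposed as a plain function):

// First call: no cursor yet, returns the 5 newest documents.
const firstPage = await getNextDocs(5);

// Next call: pass the _id of the last document from the previous page.
const lastDocId = firstPage[firstPage.length - 1]._id;
const secondPage = await getNextDocs(5, lastDocId);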
There are some good answers giving the solution that uses skip() & limit(); however, in some scenarios we also need a document count to generate pagination. Here's what we do in our projects:

const PaginatePlugin = (schema, options) => {
  options = options || {}
  schema.query.paginate = async function(params) {
    const pagination = {
      limit: options.limit || 10,
      page: 1,
      count: 0
    }
    pagination.limit = parseInt(params.limit) || pagination.limit
    const page = parseInt(params.page)
    pagination.page = page > 0 ? page : pagination.page
    const offset = (pagination.page - 1) * pagination.limit

    const [data, count] = await Promise.all([
      this.limit(pagination.limit).skip(offset),
      this.model.countDocuments(this.getQuery())
    ]);
    pagination.count = count;
    return { data, pagination }
  }
}

mySchema.plugin(PaginatePlugin, { limit: DEFAULT_LIMIT })

// using async/await
const { data, pagination } = await MyModel.find(...)
  .populate(...)
  .sort(...)
  .paginate({ page: 1, limit: 10 })

// or using Promise
MyModel.find(...).paginate(req.query)
  .then(({ data, pagination }) => {
  })
  .catch(err => {
  })
Here is a version that I attach to all my models. It depends on underscore for convenience and async for performance. The opts allows for field selection and sorting using the mongoose syntax.

var _ = require('underscore');
var async = require('async');

function findPaginated(filter, opts, cb) {
  var defaults = { skip: 0, limit: 10 };
  opts = _.extend({}, defaults, opts);
  filter = _.extend({}, filter);

  var cntQry = this.find(filter);
  var qry = this.find(filter);

  if (opts.sort) {
    qry = qry.sort(opts.sort);
  }
  if (opts.fields) {
    qry = qry.select(opts.fields);
  }

  qry = qry.limit(opts.limit).skip(opts.skip);

  async.parallel(
    [
      function (cb) { cntQry.count(cb); },
      function (cb) { qry.exec(cb); }
    ],
    function (err, results) {
      if (err) return cb(err);
      var count = 0, ret = [];
      _.each(results, function (r) {
        if (typeof(r) == 'number') {
          count = r;
        } else if (typeof(r) != 'number') {
          ret = r;
        }
      });
      cb(null, { totalCount: count, results: ret });
    }
  );

  return qry;
}

Attach it to your model schema:

MySchema.statics.findPaginated = findPaginated;
The above answers hold good. Just an add-on for anyone who is into async/await rather than promises:

const findAllFoo = async (req, resp, next) => {
  const pageSize = 10;
  const currentPage = 1;

  try {
    const foos = await FooModel.find() // find all documents
      .skip(pageSize * (currentPage - 1)) // do not retrieve all records, but skip the first 'n' records
      .limit(pageSize); // limit/restrict the number of records to display

    const numberOfFoos = await FooModel.countDocuments(); // count the number of records for that model

    resp.setHeader('max-records', numberOfFoos);
    resp.status(200).json(foos);
  } catch (err) {
    resp.status(500).json({ message: err });
  }
};
You can use the following lines of code as well:

const per_page = parseInt(req.query.per_page) || 10
const page_no = parseInt(req.query.page_no) || 1
const pagination = {
  limit: per_page,
  skip: per_page * (page_no - 1)
}

const users = await User.find({<CONDITION>})
  .limit(pagination.limit)
  .skip(pagination.skip)
  .exec()

This code will work in the latest version of Mongo.
A solid approach to implement this is to pass the values from the frontend using a query string. Let's say we want to get page #2 and also limit the output to 25 results. The query string would look like this:

?page=2&limit=25
// this would be appended to your URL: http://localhost:5000?page=2&limit=25

Let's see the code:

// We receive the values with req.query.<valueName>, e.g. req.query.page.
// Since each is a String, we need to convert it to a Number in order to do our
// necessary calculations. Let's do it using the parseInt() method, and let's
// also provide some default values:
const page = parseInt(req.query.page, 10) || 1; // getting the 'page' value
const limit = parseInt(req.query.limit, 10) || 25; // getting the 'limit' value
const startIndex = (page - 1) * limit; // the start index, aka the SKIP value
const endIndex = page * limit; // the end index

// We also need the total, which we can get easily using the Mongoose built-in
// countDocuments method:
const total = await <<modelName>>.countDocuments();

// skip() returns results after skipping a certain number of documents.
// limit() specifies the maximum number of results to be returned.
// Let's assume both are set (if not, the default values will be used):
query = query.skip(startIndex).limit(limit);

// Executing the query
const results = await query;

// Pagination result: let's prepare an object for the frontend
const pagination = {};

// If the endIndex is smaller than the total number of documents, we have a next page
if (endIndex < total) {
  pagination.next = {
    page: page + 1,
    limit
  };
}

// If the startIndex is greater than 0, we have a previous page
if (startIndex > 0) {
  pagination.prev = {
    page: page - 1,
    limit
  };
}

// Implementing some final touches and making a successful response (Express.js)
const advancedResults = {
  success: true,
  count: results.length,
  pagination,
  data: results
}

// That's it. All we have to do now is send the results to the frontend.
res.status(200).json(advancedResults);

I would suggest implementing this logic into middleware so you can use it for various routes/controllers, as sketched below.
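A sketch of that middleware idea (the paginate name, model parameter, and res.advancedResults property are illustrative, not from any particular library):

// Illustrative Express middleware wrapping the pagination logic for any model.
const paginate = (model) => async (req, res, next) => {
  const page = parseInt(req.query.page, 10) || 1;
  const limit = parseInt(req.query.limit, 10) || 25;
  const startIndex = (page - 1) * limit;
  const endIndex = page * limit;

  const total = await model.countDocuments();
  const results = await model.find().skip(startIndex).limit(limit);

  const pagination = {};
  if (endIndex < total) pagination.next = { page: page + 1, limit };
  if (startIndex > 0) pagination.prev = { page: page - 1, limit };

  // Stash the result on res so the downstream handler can send it.
  res.advancedResults = { success: true, count: results.length, pagination, data: results };
  next();
};

// Usage:
// router.get('/items', paginate(ItemModel), (req, res) =>
//   res.status(200).json(res.advancedResults));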
You can do it using mongoose-paginate-v2. For more info, click here.

const mongoose = require('mongoose');
const mongoosePaginate = require('mongoose-paginate-v2');

const mySchema = new mongoose.Schema({
  // your schema code
});
mySchema.plugin(mongoosePaginate);

const myModel = mongoose.model('SampleModel', mySchema);

myModel.paginate().then({}) // Usage
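The bare .then({}) above is only a stub (then expects a function); going by the plugin's documented paginate(query, options) signature, real usage would look more like this sketch:

myModel.paginate({}, { page: 1, limit: 10 })
  .then(result => {
    // result.docs       -> the documents for this page
    // result.totalDocs  -> total matching documents
    // result.totalPages -> total number of pages
    console.log(result.docs.length, result.totalDocs, result.totalPages);
  })
  .catch(err => console.error(err));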
I have found a very efficient way and implemented it myself. I think this way is the best for the following reasons:

It does not use skip, whose time complexity doesn't scale well.
It uses IDs to query the documents. IDs are indexed by default in MongoDB, making them very fast to query.
It uses lean queries, which are known to be VERY performant, as they remove a lot of "magic" from Mongoose and return a document that comes kind of "raw" from MongoDB.
It doesn't depend on any third-party packages that might contain vulnerabilities or have vulnerable dependencies.

The only caveat is that some methods of Mongoose, such as .save(), will not work well with lean queries; such methods are listed in this awesome blog post. I really recommend this series, because it considers a lot of aspects, such as type safety (which prevents critical errors) and PUT/PATCH.

I will provide some context: this is a Pokémon repository, and the pagination works as follows. The API receives unsafeId from the req.body object of Express. We need to convert this to a string in order to prevent NoSQL injections (it could be an object with evil filters). This unsafeId can be an empty string or the ID of the last item of the previous page. It goes like this:

/**
 * @description GET All with pagination, will return 200 in success
 * and receives the last ID of the previous page or undefined for the first page
 * Note: You should take care, read and consider about Off-By-One error
 * @param {string|undefined|unknown} unsafeId - An entire page that comes after this ID will be returned
 */
async readPages(unsafeId) {
  try {
    const id = String(unsafeId || '');
    let criteria;
    if (id) {
      criteria = { _id: { $gt: id } };
    } // else criteria is undefined

    // This query looks a bit redundant on `lean`, I just really wanted to make sure it is lean
    const pokemon = await PokemonSchema.find(
      criteria || {},
    ).setOptions({ lean: true }).limit(15).lean();

    // This would throw on an empty page
    // if (pokemon.length < 1) {
    //   throw new PokemonNotFound();
    // }

    return pokemon;
  } catch (error) {
    // In this implementation, any error that is not defined by us
    // will not be returned by the API, to prevent information disclosure.
    // Our errors have this property, which indicates
    // that no sensitive information is contained within this object
    if (error.returnErrorResponse) {
      throw error;
    } // else
    console.error(error.message);
    throw new InternalServerError();
  }
}

Now, to consume this and avoid off-by-one errors in the frontend, you do it like the following, considering that pokemons is the array of Pokémon documents returned by the API:

// Page zero
const pokemons = await fetchWithPagination({ 'page': undefined });

// Page one
// You can also use a fixed number of pages instead of `pokemons.length`,
// but `pokemons.length` is more reliable (and a bit slower).
// You will have trouble with the last page if you use it with a constant
// predefined number.
const id = pokemons[pokemons.length - 1]._id;

if (!id) {
  throw new Error('Last element from page zero has no ID');
} // else

const page2 = await fetchWithPagination({ 'page': id });

As a note here, Mongoose ObjectIds embed a creation timestamp, so a newer ID sorts after an older one; that is the foundation of this answer.
This approach has been tested against off-by-one errors; for instance, the last element of a page could be returned as the first element of the following one (duplicated), or an element that sits between the last of the previous page and the first of the current page might disappear. When you are done with all the pages and request a page after the last element (one that does not exist), the response will be an empty array with 200 (OK), which is awesome!
The easiest and speediest way is to paginate with the ObjectId.

Example:

// Initial load condition
condition = { limit: 12, type: "" };

// Take the first and last ObjectId from the response data

// Page next condition
condition = { limit: 12, type: "next", firstId: "57762a4c875adce3c38c662d", lastId: "57762a4c875adce3c38c6615" };

// Page next condition
condition = { limit: 12, type: "next", firstId: "57762a4c875adce3c38c6645", lastId: "57762a4c875adce3c38c6675" };

In mongoose:

var condition = {};
var sort = { _id: 1 };
if (req.body.type == "next") {
  condition._id = { $gt: req.body.lastId };
} else if (req.body.type == "prev") {
  sort = { _id: -1 };
  condition._id = { $lt: req.body.firstId };
}

var query = Model.find(condition, {}, { sort: sort }).limit(req.body.limit);

query.exec(function(err, properties) {
  return res.json({ "result": properties });
});
The best approach (IMO) is to use skip and limit, BUT within a limited set of collections or documents. To make the query within limited documents, we can use a specific index, like an index on a DATE type field. See below:

let page = ctx.request.body.page || 1
let size = ctx.request.body.size || 10
let DATE_FROM = ctx.request.body.date_from
let DATE_TO = ctx.request.body.date_to

var start = (parseInt(page) - 1) * parseInt(size)

let result = await Model.find({ created_at: { $lte: DATE_FROM, $gte: DATE_TO } })
  .sort({ _id: -1 })
  .select('<fields>')
  .skip(start)
  .limit(size)
  .exec(callback)
The easiest plugin for pagination: https://www.npmjs.com/package/mongoose-paginate-v2

Add the plugin to a schema and then use the model's paginate method:

var mongoose = require('mongoose');
var mongoosePaginate = require('mongoose-paginate-v2');

var mySchema = new mongoose.Schema({
  /* your schema definition */
});
mySchema.plugin(mongoosePaginate);

var myModel = mongoose.model('SampleModel', mySchema);

myModel.paginate().then({}) // Usage
let page, limit, skip, lastPage, query;

page = req.params.page * 1 || 1;   // the page, fetched from the server
limit = req.params.limit * 1 || 1; // the limit, also fetched from the server
skip = (page - 1) * limit;         // number of documents to skip
lastPage = page * limit;           // last index

counts = await userModel.countDocuments() // number of documents in the collection

query = query.skip(skip).limit(limit) // current page

const paginate = {}

// For previous page
if (skip > 0) {
  paginate.prev = {
    page: page - 1,
    limit: limit
  }
}

// For next page
if (lastPage < counts) {
  paginate.next = {
    page: page + 1,
    limit: limit
  }
}

results = await query // Here are the final results of the query.
const page = req.query.page * 1 || 1;
const limit = req.query.limit * 1 || 1000;
const skip = (page - 1) * limit;

query = query.skip(skip).limit(limit);
This is an example function for getting the results of a skills model with pagination and limit options:

export function get_skills(req, res) {
  console.log('get_skills');
  var page = req.body.page; // 1 or 2
  var size = req.body.size; // 5 or 10 per page
  var query = {};
  if (page < 0 || page === 0) {
    var result = { 'status': 401, 'message': 'invalid page number, should start with 1' };
    return res.json(result);
  }
  query.skip = size * (page - 1)
  query.limit = size

  Skills.count({}, function(err1, tot_count) { // to get the total count of skills
    if (err1) {
      res.json({
        status: 401,
        message: 'something went wrong!',
        err: err1,
      })
    } else {
      Skills.find({}, {}, query).sort({ 'name': 1 }).exec(function(err, skill_doc) {
        if (!err) {
          res.json({
            status: 200,
            message: 'Skills list',
            data: skill_doc,
            tot_count: tot_count,
          })
        } else {
          res.json({
            status: 401,
            message: 'something went wrong',
            err: err
          })
        }
      }) // Skills.find end
    }
  }); // Skills.count end
}
Using ts-mongoose-pagination:

const trainers = await Trainer.paginate(
  { user: req.userId },
  {
    perPage: 3,
    page: 1,
    select: '-password -createdAt -updatedAt -__v',
    sort: { createdAt: -1 },
  }
)

return res.status(200).json(trainers)
The code below is working fine for me. You can add filters to find(), and use the same filters in the count query to get accurate results.

export const yourController = async (req, res) => {
  const { body } = req;

  var perPage = body.limit;
  var page = Math.max(0, body.page);

  yourModel
    .find() // You can add your filters inside
    .limit(perPage)
    .skip(perPage * (page - 1))
    .exec(function (err, dbRes) {
      yourModel.count().exec(function (err, count) { // You can add your filters inside
        res.send(
          JSON.stringify({
            Articles: dbRes,
            page: page,
            pages: count / perPage,
          })
        );
      });
    });
};
You can write the query like this:

mySchema.find().skip((page - 1) * per_page).limit(per_page).exec(function(err, articles) {
  if (err) {
    return res.status(400).send({ message: err });
  } else {
    res.json(articles);
  }
});

page: page number coming from the client as a request parameter.
per_page: number of results shown per page.

If you are using the MEAN stack, the following blog post provides much of the information needed to create pagination in the front end using angular-UI bootstrap, with mongoose skip and limit methods in the backend: https://techpituwa.wordpress.com/2015/06/06/mean-js-pagination-with-angular-ui-bootstrap/
You can use skip() and limit(), but it's very inefficient. A better solution is a sort on an indexed field plus limit(). We at Wunderflats have published a small lib here: https://github.com/wunderflats/goosepage It uses the first way.
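The "sort on an indexed field plus limit()" idea, sketched with assumed names:

// Range-based paging on the indexed _id field: no skip() involved.
const page = await MyModel.find({ _id: { $gt: lastSeenId } })
  .sort({ _id: 1 })
  .limit(50);
// Remember page[page.length - 1]._id as lastSeenId for the next request.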
If you are using mongoose as a source for a RESTful API, have a look at 'restify-mongoose' and its queries. It has exactly this functionality built in. Any query on a collection provides headers that are helpful here:

test-01:~$ curl -s -D - localhost:3330/data?sort=-created -o /dev/null
HTTP/1.1 200 OK
link: </data?sort=-created&p=0>; rel="first", </data?sort=-created&p=1>; rel="next", </data?sort=-created&p=134715>; rel="last"
.....
Response-Time: 37

So basically you get a generic server with a relatively linear load time for queries to collections. That is awesome and something to look at if you want to build your own implementation.
app.get("/:page",(req,res)=>{ post.find({}).then((data)=>{ let per_page = 5; let num_page = Number(req.params.page); let max_pages = Math.ceil(data.length/per_page); if(num_page == 0 || num_page > max_pages){ res.render('404'); }else{ let starting = per_page*(num_page-1) let ending = per_page+starting res.render('posts', {posts:data.slice(starting,ending), pages: max_pages, current_page: num_page}); } }); });
Firebase cloud functions check db for non-existent data
I'm looking for how to check if a document exists in my Cloud Functions. My function below works fine when just incrementing an existing value, but now I'm trying to add functionality where it checks whether the previous value exists, and if it doesn't, sets it to 1. I've tried different methods but I get things like errors around "snapshot.exists" or "TypeError: Cannot read property 'count' of undefined at docRef.get.then.snapshot".

var getDoc = docRef.get()
  .then(snapshot => {
    if (typeof snapshot._fieldsProto.count !== undefined) {
      console.log("haha3", snapshot._fieldsProto.count)
      var count = Number(jsonParser(snapshot._fieldsProto.count, "integerValue"));
      docRef.set({ count: count + 1 });
    } else {
      docRef.set({ count: 1 });
    }
  });

Below is the code for the exists() error:

var getDoc = docRef.get()
  .then(snapshot => {
    if (snapshot.exists()) {
      console.log("haha3", snapshot._fieldsProto.count)
      var count = Number(jsonParser(snapshot._fieldsProto.count, "integerValue"));
      docRef.set({ count: count + 1 });
    } else {
      docRef.set({ count: 1 });
    }
  });

The error for this code is: TypeError: snapshot.exists is not a function at docRef.get.then.snapshot
It seems like docRef either points to a collection or is a query. In that case your snapshot is of type QuerySnapshot. To check if a query has any results, use QuerySnapshot.empty.
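A minimal sketch of that check, assuming queryRef is the collection reference or query being fetched:

queryRef.get().then(querySnapshot => {
  if (querySnapshot.empty) {
    console.log('No matching documents.');
    return;
  }
  querySnapshot.forEach(doc => console.log(doc.id, doc.data()));
});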
I kept getting errors saying either empty or exists were not functions (tried many iterations), so eventually I landed on just using an undefined check, and it works perfectly.

var db = event.data.ref.firestore;
var docRef = db.collection(userID).doc("joined").collection("total").doc("count");

var getDoc = docRef.get()
  .then(snapshot => {
    console.log("augu1", snapshot)
    if (snapshot._fieldsProto === undefined) {
      console.log("digimon1")
      docRef.set({ count: 1 });
    } else {
      console.log("haha31", snapshot._fieldsProto.count)
      var count = Number(jsonParser(snapshot._fieldsProto.count, "integerValue"));
      docRef.set({ count: count + 1 });
    }
  });
It turns out the problem is much simpler than I imagined: DocumentSnapshot.exists is a read-only property, not a function. So the proper way to use it is:

if (snapshot.exists) { ... }
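Applied to the counter code from the question, the corrected version becomes (a sketch; snapshot.get('count') is the public accessor, so _fieldsProto isn't needed):

var getDoc = docRef.get()
  .then(snapshot => {
    if (snapshot.exists) { // property access, no parentheses
      var count = Number(snapshot.get('count'));
      return docRef.set({ count: count + 1 });
    } else {
      return docRef.set({ count: 1 });
    }
  });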
Limit number of records in firebase
Every minute I have a script that pushes a new record into my Firebase database. What I want is to delete the oldest records when the length of the list reaches a fixed value. I have been through the docs and other posts, and the thing I have found so far is something like this:

// Max number of lines of the chat history.
const MAX_ARDUINO = 10;

exports.arduinoResponseLength = functions.database.ref('/arduinoResponse/{res}').onWrite(event => {
  const parentRef = event.data.ref.parent;

  return parentRef.once('value').then(snapshot => {
    if (snapshot.numChildren() >= MAX_ARDUINO) {
      let childCount = 0;
      let updates = {};
      snapshot.forEach(function(child) {
        if (++childCount <= snapshot.numChildren() - MAX_ARDUINO) {
          updates[child.key] = null;
        }
      });
      // Update the parent. This effectively removes the extra children.
      return parentRef.update(updates);
    }
  });
});

The problem is that onWrite seems to download all the related data every time it is triggered. This is a fine process when the list is not so long, but I have about 4000 records, and every month it seems I blow through my Firebase download quota that way. Does anyone know how to handle this kind of situation?
OK, so in the end I came up with 3 functions. One updates the count of arduino records, one fully recounts it if the counter is missing, and the last one uses the counter to make a query with the limitToFirst filter, so it retrieves only the relevant data to remove.

It is actually a combination of these two examples provided by Firebase:
https://github.com/firebase/functions-samples/tree/master/limit-children
https://github.com/firebase/functions-samples/tree/master/child-count

Here is my final result:

const MAX_ARDUINO = 1500;

exports.deleteOldArduino = functions.database.ref('/arduinoResponse/{resId}/timestamp').onWrite(event => {
  const collectionRef = event.data.ref.parent.parent;
  const countRef = collectionRef.parent.child('arduinoResCount');

  return countRef.once('value').then(snapCount => {
    return collectionRef.limitToFirst(snapCount.val() - MAX_ARDUINO).transaction(snapshot => {
      snapshot = null;
      return snapshot;
    })
  });
});

exports.trackArduinoLength = functions.database.ref('/arduinoResponse/{resId}/timestamp').onWrite(event => {
  const collectionRef = event.data.ref.parent.parent;
  const countRef = collectionRef.parent.child('arduinoResCount');

  // Return the promise from countRef.transaction() so our function
  // waits for this async event to complete before it exits.
  return countRef.transaction(current => {
    if (event.data.exists() && !event.data.previous.exists()) {
      return (current || 0) + 1;
    } else if (!event.data.exists() && event.data.previous.exists()) {
      return (current || 0) - 1;
    }
  }).then(() => {
    console.log('Counter updated.');
  });
});

exports.recountArduino = functions.database.ref('/arduinoResCount').onWrite(event => {
  if (!event.data.exists()) {
    const counterRef = event.data.ref;
    const collectionRef = counterRef.parent.child('arduinoResponse');

    // Return the promise from counterRef.set() so our function
    // waits for this async event to complete before it exits.
    return collectionRef.once('value')
      .then(arduinoRes => counterRef.set(arduinoRes.numChildren()));
  }
});

I have not tested it yet, but I will post my results soon! I also heard that one day Firebase will add a "size" query, which is definitely missing in my opinion.