I'm implementing a "lowest bid auction" system. Users set a base price, a price drop amount, and a price drop interval. For example, if a user sets base price = $1000, price drop amount = $100, and price drop interval = 1 hour, then after 3 hours the price will be $700, after 3.2 hours it will still be $700, after 4 hours it will be $600, and so on.
These prices are stored in a MongoDB database and need to be queried, so recalculating the current price in Node.js after every query gets really expensive. Is there a way to tell MongoDB to update each document at a given time interval? Or should I implement a Node.js microservice that keeps track of all these timers and updates the documents when needed?
In practice these intervals will be fairly large (usually hours), but I want to keep track of a lot of them.
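The per-document calculation would look roughly like this (field names are just illustrative):
// Price after n whole elapsed intervals: basePrice - n * dropAmount,
// floored at zero.
function currentPrice(item, now) {
  const intervalsElapsed = Math.floor((now - item.startTime) / item.dropIntervalMs);
  return Math.max(0, item.basePrice - intervalsElapsed * item.dropAmount);
}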
Thank You,
Matteo
If you're using Node.js you could use Agenda to do this: https://github.com/rschmukler/agenda
So you'd define a task to reduce the prices of all your items:
var Agenda = require('agenda');

var mongoConnectionString = "mongodb://127.0.0.1/agenda";
var agenda = new Agenda({db: {address: mongoConnectionString}});

agenda.define('reduce prices', {priority: 'high', concurrency: 1}, function(job, done) {
  // ... do your db query to reduce the prices here
  done(); // don't forget to call done!
});
And then invoke it once an hour:
agenda.on('ready', function() {
  agenda.every('60 minutes', 'reduce prices');
  agenda.start();
});
And that's it. If you're using Express and put this in your Express app, you don't need to run a separate cron task. However, this query will run in the same process as the rest of your application, so depending on how many "product" documents you're reducing prices for, this may or may not be an issue.
Versions: Keystone v4
I have a Mongo database with >20k items. What I want is a paginator that lets the user quickly scroll through the database 25 elements at a time. Currently this feature is implemented, but the server takes >40 seconds to return the results because it queries the entire (20k item) database. However, only 25 elements are displayed on a single page, so I feel like if it just fetched 25 results instead of 20k it should be quicker. How could I implement this? I know about the .limit() function, but I can't seem to figure out pagination in Keystone while using it.
Current Code:
var q = Items.model.find();
q.exec(function(err, newss) {
  console.log('There are %d', newss.length); // Prints out 20k number
  ... // skip
  locals.cnts = newss;
  // console.log(newss[0])
  locals.pagerr = pager({
    page: parseInt(req.query.page, 10) || 1,
    perPage: 25,
    total: newss.length
  });
  locals.itemsss = locals.cnts.slice(
    locals.pagerr.first - 1,
    locals.pagerr.last
  );
  next();
});
In its current implementation, it takes >40 seconds to return the paginated results. How can I fix this?
The model.find() function you're using here is equivalent to the Mongoose find() function. Since you're calling it without any filters, this code retrieves all 20k items from the database every time it runs. All of that data is transferred to the web server/Node process, where the body of your function(err, newss) {...} callback is run. Only then are the 25 items you're after extracted from the set.
Instead, if you want to use offset-based pagination like this, you should be using the query.limit() and query.skip() functions. If you need to count the total items first, do so in a separate query using query.count().
I haven't tested this code (and it's been a while since I used Mongoose), but I think you want something like this:
// Warning! Untested example code
Items.model.find().count(function (err, count) {
  console.log('There are %d', count);
  locals.pager = pager({
    page: parseInt(req.query.page, 10) || 1,
    perPage: 25,
    total: count
  });
  Items.model.find()
    .skip(locals.pager.first - 1) // pager's "first" is 1-based
    .limit(25)
    .exec(function (err, results) {
      locals.results = results;
      next();
    });
});
On a more general note – if you like Keystone and want to use Mongo, keep an eye on the Keystone 6 updates. Keystone 6 uses Prisma 2 as its ORM layer, and they recently released support for Mongo. As soon as that functionality is production ready, we'll be supporting it in Keystone too.
I just started exploring Stripe for handling payments in Node.js. I want to create a payment system that works as follows:
I create a workspace and start a Stripe subscription of $10/month.
When someone joins my workspace, it costs me an extra $10/month.
So how do I handle adding a person to my subscription? I found the function below, but I was wondering two things:
How do I add one person to this subscription? It now says quantity: 2, but how do I simply increment the quantity by 1 for every user?
In the example below I use the ID 'sub_6OZnwv0DZBrrPt' to retrieve the Stripe subscription. Where do I get this ID from? I could save the subscription ID in the workspace's MongoDB document after creating the subscription, but I'm not sure whether it's safe to keep it in my database like that. Let me know if you have any suggestions.
This is the function to update a subscription:
stripe.subscriptions.update(
  'sub_6OZnwv0DZBryPt',
  { quantity: 2 },
  (err, subscription) => {
    // asynchronously called
  }
);
On 1.), you need to retrieve the current subscription using the stored ID. This can be done as:
stripe.subscriptions.retrieve(
  "sub_6OZnwv0DZBryPt",
  function(err, subscription) {
    // asynchronously called
  }
);
The subscription object will have information about the current quantity (see the example response and docs on the subscription object), which leads to your second question.
On 2.), you only need to store the subscription ID to retrieve it later. It's safe to do so; the ID is meaningless to others unless they also have your test/live API keys. Be sure to secure those keys, and feel free to store IDs like subscription_id, customer_id, etc.
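Putting both together, a minimal sketch of adding one person to the subscription (untested; workspace.subscription_id is a hypothetical field holding the ID stored in your Mongo document):
// Retrieve the subscription, then bump its quantity by one.
stripe.subscriptions.retrieve(workspace.subscription_id, function(err, subscription) {
  if (err) return console.error(err);
  stripe.subscriptions.update(
    subscription.id,
    { quantity: subscription.quantity + 1 },
    function(err, updated) {
      // updated.quantity now includes the new member
    }
  );
});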
Example: I have 200,000 users who each have to be charged 30 days after registration. They all have different registration dates, so the charge date is different for every user.
After googling I found the Node Schedule library, which can fire a function at a specific time. I set the payment date for every user at the moment of registration. For example, if a user registers at 17:36, their payment date will be 17:36 thirty days later.
With node-schedule the code looks like this:
var schedule = require('node-schedule');

schedule.scheduleJob(chargingDateFromDB, () => {
  console.log('The payment is done and a new charge date is set');
});
This way, I believe, Node will keep the information about every schedule in memory for 30 days, then after the payment for the next 30 days, and so on...
So I have doubts about performance. Is there a better way to implement monthly subscriptions? Is my way of charging users commonly used, or is there a more efficient way?
Let ONE schedule run daily, select the users who need to be charged, and charge them.
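A minimal sketch of that approach (untested; the User model and chargeUser() helper are hypothetical stand-ins for your own data layer and payment call):
var schedule = require('node-schedule');

// One job for everyone: run daily at 03:00, find all users whose
// next charge date has passed, charge them, and push the date forward.
schedule.scheduleJob('0 3 * * *', async () => {
  const due = await User.find({ nextChargeAt: { $lte: new Date() } });
  for (const user of due) {
    await chargeUser(user); // hypothetical payment call
    user.nextChargeAt = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000);
    await user.save();
  }
});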
My current database structure looks like this (screenshot omitted): it basically has a mobile_users table and a top100 table which will be the leaderboard.
I'm trying to figure out how to write a cloud function that executes every minute and updates/populates the top100 table with the userid and earned_points from mobile_users, sorted by earned_points.
Should I add a rank field on this table, or is there a way to order the table in asc/desc order based on mobile_users?
My current function looks like this:
exports.dbRefOnWriteEvent = functions.database.ref('/mobile_user/{userId}/{earned_points}').onWrite(event => {
  var ref = admin.database().ref("/top100");
  // Return the promise so the function isn't terminated early.
  return ref.orderByChild("earned_points").once("value", function(dataSnapshot) {
    var i = 0;
    dataSnapshot.forEach(function(childSnapshot) {
      // orderByChild() returns ascending order, so count down from the
      // number of children to give the highest score rank 1.
      var r = (dataSnapshot.numChildren() - i);
      childSnapshot.ref.update({rank: r}, function(error) {
        if (error != null)
          console.log("update error: " + error);
      });
      i++;
    });
  });
});
I have yet to figure out how to tell the cloud function to execute every minute, and I'm having trouble structuring these types of queries.
My function is also failing to populate the top100 table with the three current users. I would appreciate it if someone could point me in the right direction.
Create an HTTP request function that does the work.
Then use a cron job to call your HTTP Firebase function every minute.
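A minimal sketch of such an HTTP function (untested; it reuses the ranking logic from the question's snippet):
exports.updateLeaderboard = functions.https.onRequest((req, res) => {
  const ref = admin.database().ref('/top100');
  ref.orderByChild('earned_points').once('value')
    .then(snapshot => {
      const updates = [];
      let i = 0;
      snapshot.forEach(child => {
        // Highest score gets rank 1 (results arrive in ascending order).
        updates.push(child.ref.update({ rank: snapshot.numChildren() - i }));
        i++;
      });
      return Promise.all(updates);
    })
    .then(() => res.status(200).send('leaderboard updated'))
    .catch(err => res.status(500).send(String(err)));
});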
Maybe you can have two root nodes in your database: one like the above, and a second node called leaderboard.
That second node can be an array where the index reflects the rank and each entry holds the score and uid.
Leaderboard
|-- [0]
|   |-- score: 5000
|   |-- uid: 4zzdawqeasdasq2w1
|-- [1]
|   |-- score: 4990
|   |-- uid: 889asdas1891sadaw
Then when you get a new score, you update the user's node and also update the leaderboard. To display names, you just grab the uid and look up the name in the user's node.
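A rough sketch of that dual write (untested; the paths and field names are assumptions based on the question):
function recordScore(uid, score) {
  const db = admin.database();
  // 1. Update the user's own node...
  return db.ref('/mobile_users/' + uid + '/earned_points').set(score)
    // 2. ...then rebuild the leaderboard from the current top 100 scores.
    .then(() => db.ref('/mobile_users').orderByChild('earned_points').limitToLast(100).once('value'))
    .then(snapshot => {
      const board = [];
      snapshot.forEach(child => {
        // limitToLast() returns ascending order, so unshift() puts the
        // highest score at index 0.
        board.unshift({ score: child.val().earned_points, uid: child.key });
      });
      return db.ref('/leaderboard').set(board);
    });
}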
Like the other posters said, use an HTTP Firebase Cloud Function and a cron job.
However, I would recommend you use the cron job just to keep the cloud function alive (look up cold starts in Firebase functions), and also make a fetch request from the front end to trigger the cloud function every time a user plays the game and generates a score. Otherwise, if you get 10 plays per minute and the leaderboard only updates once a minute, that will not be a great user experience for players who expect a real-time leaderboard.
I'm using jsforce to access Salesforce via the Bulk API. It has two ways of updating and deleting records. One is using the normal bulk API, which means creating a job and batches:
var job = conn.bulk.createJob("Account", "delete");
var batch = job.createBatch();
var accounts = getAccountsByDate(jsforce.Date.TODAY);

batch.execute(accounts);
batch.on('response', function(rets) {
  // do things
});
The other way is to use the "query" interface, like this:
conn.sobject('Account')
  .find({ CreatedDate: jsforce.Date.TODAY })
  .destroy(function(err, rets) {
    // do things
  });
The second way certainly seems easier, but I can't get it to update or delete more than 10,000 records at a time, which appears to be a Salesforce API limit on batch size. Note that setting the maxFetch property in jsforce appears to have no effect in this case.
So is it safe to assume that the query-style interface only creates a single batch? The jsforce documentation is not clear on this point.
Currently the bulk.load() method in the jsforce bulk API generates a job with one batch, so the limit of 10,000 records per batch applies. The same is true when using the find-and-destroy interface, which uses bulk.load() internally.
To avoid this limit, you can create a job with bulk.createJob(), create several batches with job.createBatch(), and dispatch the records to delete across those batches so that no batch exceeds the limit.
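A minimal sketch of that chunking approach (untested; records is assumed to be the already-retrieved array of records to delete):
// Split the records into batches of at most 10,000 under a single job.
var job = conn.bulk.createJob('Account', 'delete');
var BATCH_SIZE = 10000;

for (var i = 0; i < records.length; i += BATCH_SIZE) {
  var batch = job.createBatch();
  batch.execute(records.slice(i, i + BATCH_SIZE));
  batch.on('response', function(rets) {
    // per-batch results
  });
  batch.on('error', function(err) {
    console.error('batch error:', err);
  });
}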